Efficacy and safety of alirocumab among individuals with diabetes mellitus and atherosclerotic cardiovascular disease in the ODYSSEY phase 3 trials

Aims: Individuals with both diabetes mellitus (DM) and atherosclerotic cardiovascular disease (ASCVD) are at very high risk of cardiovascular events. This post-hoc analysis evaluated the efficacy and safety of the PCSK9 inhibitor alirocumab among 984 individuals with DM and ASCVD pooled from 9 ODYSSEY Phase 3 trials.

Materials and methods: Changes in low-density lipoprotein cholesterol (LDL-C) and other lipids from baseline to Week 24 were analysed (intention-to-treat) in four pools by alirocumab dosage (150 mg every 2 weeks [150] or 75 mg with possible increase to 150 mg every 2 weeks [75/150]), control (placebo/ezetimibe) and background statin usage (yes/no).

Results: At Week 24, LDL-C changes from baseline in pools with background statins were −61.5% with alirocumab 150 (vs −1.0% with placebo), −46.4% with alirocumab 75/150 (vs +6.3% with placebo) and −48.7% with alirocumab 75/150 (vs −20.6% with ezetimibe), and −54.9% with alirocumab 75/150 (vs +4.0% with ezetimibe) without background statins. A greater proportion of alirocumab recipients achieved LDL-C < 70 and < 55 mg/dL at Week 24 vs controls. Alirocumab also resulted in significant reductions in non-high-density lipoprotein cholesterol, apolipoprotein B and lipoprotein(a) vs controls. Alirocumab did not appear to affect glycaemia over 78-104 weeks. Overall safety was similar between treatment groups, with a higher injection-site reaction frequency (mostly mild) with alirocumab.

Conclusion: Alirocumab significantly reduced LDL-C and other atherogenic lipid parameters, and was generally well tolerated in individuals with DM and ASCVD.

meta-analyses showing that treatment with statins reduces LDL-C levels and ASCVD risk in individuals with DM. [11][12][13] DM is commonly associated with diabetic dyslipidaemia, including elevated triglycerides and reduced levels of high-density lipoprotein cholesterol (HDL-C), and with an increased number of small dense LDL particles and apolipoprotein (apo) B-containing particles, which is thought to contribute to the increased risk associated with DM. 3,14 Because of this, some guidelines have suggested using non-HDL-C, representative of the sum of all atherogenic cholesterol-containing particles, as an alternative or secondary treatment target to LDL-C. 8,9,15 However, despite increases in the use of high-intensity statin therapy in practice, recent evidence indicates that many individuals with ASCVD and/or DM are not achieving LDL-C and non-HDL-C goals. 16 Further reduction in LDL-C and ASCVD events has been observed in individuals with DM and ASCVD when non-statin therapies, ezetimibe 17 or the proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitor evolocumab, 18 were added to statin therapy, compared with statins alone. The proportion of participants in these trials who experienced adverse events was comparable with that of controls. Based on these data, guidelines have been updated and now propose that adding ezetimibe and/or a PCSK9 inhibitor should be considered if the individual does not attain sufficient LDL-C reduction with maximally tolerated statins alone, for example, if they have an insufficient response to statin therapy or are unable to tolerate high doses, or any dose, of statins.
[7][8][9][10]15 Alirocumab is a PCSK9 inhibitor that significantly reduced LDL-C and other atherogenic lipid parameters in participants with hypercholesterolaemia in the Phase 3 ODYSSEY trials, [19][20][21][22][23][24][25][26] including dedicated trials involving individuals with DM who were receiving insulin therapy 27 or had mixed dyslipidaemia, 28 with a safety profile comparable to controls. Alirocumab has also been demonstrated to reduce major adverse cardiovascular events vs placebo in patients with recent acute coronary syndrome in the ODYSSEY OUTCOMES trial. 29 Subgroup analyses have suggested similar efficacy and tolerability of alirocumab in individuals with and without DM. 26,[30][31][32][33] However, it is important to examine the effects of alirocumab in the specific subgroup of individuals with both DM and ASCVD, who are at particularly high risk and may benefit from additional lipid-lowering therapy beyond a statin. [7][8][9][10]15 This post-hoc analysis used pooled data from 9 ODYSSEY Phase 3 trials to evaluate the efficacy and safety of alirocumab in individuals with both DM and ASCVD.

| Study designs and participants

This post-hoc pooled analysis included individuals with a medical history of Type 1 or Type 2 DM and ASCVD who participated in 9 randomized, double-blind, placebo- or ezetimibe-controlled ODYSSEY trials. For all but one of the trials included in this analysis, eligible participants with a history of ASCVD were required to have LDL-C levels ≥70 mg/dL at screening; in one trial (HIGH FH), eligible individuals had heterozygous familial hypercholesterolaemia with LDL-C levels ≥160 mg/dL at screening. ASCVD was defined as coronary heart disease, ischaemic stroke or peripheral arterial disease. 34

| Endpoints

The primary efficacy endpoint was percentage change from baseline in LDL-C at Week 24, as in the primary trial analyses. 26

| Statistical analyses

Data were analysed using the same statistical approaches as those used for the primary trial analyses. 26 Efficacy was analysed using an intent-to-treat (ITT) approach, including all patients with a baseline and at least one post-baseline LDL-C value, regardless of adherence to treatment, in pools as described above. Least-squares mean lipid values were derived from a mixed-effects model with repeated measures for lipids assumed to follow a normal distribution, and adjusted mean values were calculated from multiple imputation followed by robust regression for lipids not following a normal distribution (ie, Lp(a) and triglycerides), as described previously. 26 The proportion of individuals achieving an LDL-C level < 70 or < 55 mg/dL was analysed using a modified ITT approach, including only on-treatment lipid values, using multiple imputation followed by logistic regression. LDL-C < 55 mg/dL is a goal not previously specified for the ODYSSEY trials but is assessed here following recent guideline updates from the American Association of Clinical Endocrinologists. 9 Descriptive statistics only were used for baseline and safety analyses; no formal statistical inference was planned in the original study protocols. The effects of treatment on glycated haemoglobin (HbA1c) and fasting plasma glucose (FPG) are presented for the placebo- and ezetimibe-controlled pools using descriptive statistics and graphs during the treatment period (ie, up to 21 days after the last injection). Statistical analyses were performed using SAS version 9.4 (SAS Institute Inc., Cary, North Carolina).
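As a rough illustration of these analyses, the following Python sketch fits a repeated-measures mixed model for percentage change in LDL-C and a logistic regression for LDL-C goal attainment. All column names and the input file are hypothetical, and statsmodels' random-intercept model only approximates the SAS mixed-effects repeated-measures analysis used in the trials.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per patient visit, with columns
# patient_id, treatment, week, pct_change_ldlc, ldlc_mg_dl.
df = pd.read_csv("odyssey_pool.csv")

# Mixed-effects model with repeated measures: percentage change in LDL-C
# modelled by treatment, visit, and their interaction, with a random
# intercept per patient (an approximation of the trials' MMRM).
mmrm = smf.mixedlm(
    "pct_change_ldlc ~ C(treatment) * C(week)",
    data=df,
    groups=df["patient_id"],
).fit()
print(mmrm.summary())

# Modified ITT goal attainment: proportion achieving LDL-C < 70 mg/dL at
# Week 24 (on-treatment values), compared by logistic regression.
wk24 = df[df["week"] == 24].copy()
wk24["below_70"] = (wk24["ldlc_mg_dl"] < 70).astype(int)
logit = smf.logit("below_70 ~ C(treatment)", data=wk24).fit()
print(logit.summary())
```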
| Baseline characteristics

A total of 984 participants with DM and ASCVD from 9 ODYSSEY Phase 3 clinical trials were included in the analysis. Most had Type 2 DM (n = 969, 98.5%), with few having Type 1 DM (n = 15, 1.5%). The most common type of ASCVD was coronary heart disease (85%-100% of patients across the groups); most individuals (83%-94%) had hypertension (Table 1). Baseline characteristics were generally well balanced between alirocumab and control groups within the pools of studies using background statins (Table 1).

| Efficacy

Moderate increases in HDL-C and moderate reductions in triglycerides were also observed with alirocumab, which were significant vs placebo, but not vs ezetimibe (Figure S5). HbA1c levels were stable up to 78 weeks in both alirocumab and placebo arms in the placebo-controlled pool of studies (Figure 3A). In the ezetimibe-controlled pool, stable HbA1c levels were maintained up to Week 104 in both alirocumab and ezetimibe arms (Figure 3B). Similar trends were seen in FPG (Figures 3C,D). In addition, stability in HbA1c and FPG levels with alirocumab and control was seen in all patients, irrespective of insulin use (Figure S6).

| Safety

Overall safety was generally similar between alirocumab and control groups in the placebo- and ezetimibe-controlled pools (Table 2). Myalgia and other muscle-related TEAEs occurred in <5% of alirocumab-treated patients, and occurred with a similar frequency in the control groups (Table 2). Injection-site reactions were reported by 5.0% and 2.7% of alirocumab- and placebo-treated patients, respectively. Alirocumab also reduced lipoprotein(a) [Lp(a)], which has been proposed to be an independent cardiovascular risk factor; however, other commonly used lipid-lowering strategies such as statins or ezetimibe have little or no effect on Lp(a). 39 These findings are consistent with previous sub-analyses that revealed no effect of alirocumab on glycaemic parameters 31,33,45 or no increase in new-onset DM compared with controls. 45

Table 2 footnotes: (a) Accidental or intentional administration of study drug at a frequency higher than that allowed by study protocol, if associated with an adverse event. (b) Local injection-site reactions were graded by severity and were characterized by related signs and symptoms such as (but not limited to) redness and pain. Severity was highest if an individual experienced several local injection-site reactions.

The ODYSSEY OUTCOMES trial evaluated patients with recent acute coronary syndrome, using alirocumab vs placebo, with exposure for up to 5 years (median exposure, 2.8 years). 29 Comparing corresponding pools from the population with DM and ASCVD vs the overall trial population, the alirocumab dose was
Reliability and performance of commercial RNA and DNA extraction kits for FFPE tissue cores

Cancer biomarker studies often require nucleic acid extraction from limited amounts of formalin-fixed, paraffin-embedded (FFPE) tissues, such as histologic sections or needle cores. A major challenge is the low quantity and quality of extracted nucleic acids, which can limit our ability to perform genetic analyses and have a significant influence on overall study design. This study was aimed at identifying the most reliable and reproducible method of obtaining sufficient high-quality nucleic acids from FFPE tissues. We compared the yield and quality of nucleic acids from 0.6-mm FFPE prostate tissue cores across 16 DNA and RNA extraction protocols, using 14 commercially available kits. Nucleic acid yield was determined by fluorometry, and quality was determined by spectrophotometry. All protocols yielded nucleic acids in quantities that are compatible with downstream molecular applications. However, the protocols varied widely in the quality of the extracted RNA and DNA. Four RNA and five DNA extraction protocols, including protocols from two kits for dual extraction of RNA and DNA from the same tissue source, were prioritized for further quality assessment based on the yield and purity of their products. Specifically, their compatibility with downstream reactions was assessed using both NanoString nCounter gene expression assays and reverse-transcriptase real-time PCR for RNA, and methylation-specific PCR assays for DNA. The kit deemed most suitable for FFPE tissue was the AllPrep kit by Qiagen because of its yield, quality, and ability to purify both RNA and DNA from the same sample, which would be advantageous in biomarker studies.

Introduction

The rapid evolution of technologies in cancer research has led to significant advances in our understanding of tumor genetics. Driven largely by high-throughput molecular technologies, there is a growing body of "omics" level data, annotated with cancer phenotypes. Such data permit the molecular profiling of an individual patient's cancer, which is increasingly becoming more useful as disease management becomes more personalized [1]. Molecular biomarkers are emerging as a means of determining the prognoses of individual patients and predicting how individuals will respond to treatment, leading to increased research efforts in biomarker development [2]. A major consideration in biomarker development is the accrual of sample cohorts of sufficient size to permit rigorous statistical analyses during the discovery and validation phases. Currently, hospital-based pathology laboratories and biobanks are the best repositories of biospecimens linked to robust and relevant clinical and pathological information, enabling retrospective analysis of genotype-phenotype correlations. The ability to preserve morphologic and molecular information within these biospecimens allows one to use histopathological criteria, such as pathologic grade, stage, and histologic subtypes, as the basis for biomarker study design. More specifically, existing techniques for obtaining needle cores from formalin-fixed, paraffin-embedded (FFPE) tissues allow for harvesting of specific tumour grade or type, with minimal contamination by the stroma or other confounding tissue types. The promise of using archived FFPE tissues in biomarker discovery has made it more important than ever before to unlock the potential of this critical resource.
However, archived FFPE tissues present many technical challenges in molecular analysis. Formalin fixation leads to crosslinking of nucleic acids to proteins and other cellular constituents, making the extraction of these analytes difficult. In addition, age-related changes in pH can lead to the oxidation of formalin to formic acid, causing base depurination and strand breaks [3]. Thus, nucleic acids recovered from FFPE tissue are typically fragmented, and their performance as substrates for enzyme-based assays, such as polymerase chain reaction (PCR) and sequencing, is unreliable [4]. Furthermore, the utility of nucleic acids from FFPE tissues may also be limited by contamination with inhibitors of downstream PCR-based applications [5,6]. Establishing a reliable and reproducible method of obtaining sufficient amounts of high-quality nucleic acids from limited amounts of FFPE tissue remains a major challenge in many biomarker studies. Currently, several commercial kits are available for the extraction of RNA and DNA from FFPE tissue. While each manufacturer's quality control process ensures consistent performance under given experimental conditions, each of these kits has distinct performance characteristics in terms of yield and purity. In this study, we characterize and compare the performance of six RNA, six DNA, and two dual-extraction (RNA+DNA) kits using FFPE prostate cancer tissue as input samples. Furthermore, we highlight the compatibility of the RNA and DNA extracted with these kits as analytes in PCR- and hybridization-based downstream applications.

Tissue specimens

Archived tissue was retrieved from the Department of Pathology archives of the Kingston General Hospital under the approval of the Queen's University Health Sciences Research Ethics Board. Tumour-rich regions of interest were identified on histopathology slides and harvested from paraffin blocks using a manual tissue microarray punch (Beecher Instruments, USA). Three representative cores from each case were digitally photographed with a phase contrast microscope (EVOS FL Cell Imaging System, ThermoFisher Scientific, USA), and the mean tissue volume per core for each case was calculated based on tissue lengths and diameters measured using ImageJ [7,8]. The harvested tissue cores were further processed for nucleic acid isolation as follows. To facilitate direct comparison between 14 commercial nucleic acid purification kits, we pooled and homogenized 120 cores from four different archived tissue blocks, totaling 37.2 mm³ of tissue. The pooled tissue cores were prepared for homogenization by first being deparaffinized in xylene (2 x 5 min at room temperature), washed in 100% ethanol (2 x 5 min), and then air-dried. Cores were then suspended in 150 μL of fresh 100% ethanol per mm³ of tissue and homogenized for 1 min at 10,000 rpm using a Power Gen Model 125 tissue homogenizer (ThermoFisher Scientific, USA).

Nucleic acid isolation and quantification

We compared the performance of eight RNA and eight DNA extraction protocols (from 14 commercial kits) using 0.68 mm³ of the homogenized tissue. For each kit, the manufacturer's protocols were followed (see Table 1 for more details). Generally, the extraction process involved rehydration of the homogenized tissue, followed by protease digestion, binding to a solid substrate, washing, and elution, with variations specific to each kit/protocol. Three technical replicates were performed for each extraction kit.
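As an aside, the tissue-input arithmetic above (per-core volume from the ImageJ length and diameter measurements, and ethanol suspension at 150 μL per mm³) reduces to cylinder geometry. A minimal Python sketch with made-up core dimensions:

```python
import math

def core_volume_mm3(diameter_mm: float, length_mm: float) -> float:
    """Approximate a punch core as a cylinder: V = pi * (d/2)^2 * L."""
    return math.pi * (diameter_mm / 2.0) ** 2 * length_mm

# Hypothetical 0.6-mm punch core measuring 1.1 mm long.
v = core_volume_mm3(0.6, 1.1)   # about 0.31 mm^3
ethanol_ul = 150.0 * v          # suspension at 150 uL ethanol per mm^3
print(f"core volume: {v:.2f} mm^3 -> {ethanol_ul:.0f} uL ethanol")
```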
Two of the kits tested, RecAll (ThermoFisher Scientific) and AllPrep (Qiagen), are dual-extraction kits, which permit extraction of RNA followed by DNA from the same input tissue. For these two kits, after the homogenized tissue was treated with Proteinase K and then centrifuged, the resulting tissue pellets were used as input for DNA extraction, while the supernatant was used for RNA extraction. For each extraction, the DNA and RNA yields were quantified on a Qubit 3.0 Fluorometer (ThermoFisher Scientific), using the dsDNA HS (High Sensitivity) and RNA BR (Broad-Range) Assay kits, respectively. The purity of the extracted nucleic acids was assessed by the A260/280 and A260/230 absorbance ratios obtained using a NanoDrop spectrophotometer.

Inhibition assays

During nucleic acid extraction, contamination with organic compounds can inhibit the utility of the nucleic acids in many downstream molecular applications. To quantify the inhibitory effect of contaminants in the RNA and DNA extracts obtained using the different kits, we conducted inhibition assays as previously described [5,6,9]. Briefly, standard real-time PCR reactions were set up using murine genomic DNA derived from the Ep4 cell line as the template; a primer set specific to the HSD11β1 gene (S1 Table); and the PowerUp SYBR Green Master Mix (ThermoFisher Scientific, USA). Two microliters of either water (as a control) or extracted RNA or DNA were spiked into the above reaction mixture to yield a final reaction volume of 10 μL. The reaction mixture was treated with uracil-DNA glycosylase (50°C, 2 min) and hot start (95°C, 2 min) steps, then cycled through denaturation (95°C, 15 sec) and annealing/extension (60°C, 1 min) steps for 40 cycles on a ViiA7 Real-Time PCR System (Thermo Fisher Scientific). Assays were performed in duplicate for each extraction kit. The cycle thresholds (Cq) across kits were then plotted using the GraphPad Prism v7 software (GraphPad Software Inc., USA). The inhibitory effect was quantified as the difference in the mean Cq between reactions spiked with the water control and reactions spiked with the extracted nucleic acid. One-way ANOVA with Bonferroni's corrections was performed to compare for significant differences in Cq values.

Assessment of the size distribution of RNA and DNA fragments

We postulated that the chemicals used in the various extraction kits may affect the size distribution of the final nucleic acid product. The size distribution of RNA fragments within the extracts was assessed using the RNA 6000 Pico kit on a 2100 Bioanalyzer Lab-on-a-Chip platform (Agilent Technologies, USA), and expressed as the percentage of fragments greater than 200 base pairs (DV200). Endpoint reverse transcriptase PCR (RT-PCR) was also performed to assess the size distribution of RNA fragments. More specifically, it was used to determine the amplifiable fragment length of RNA extracted using five select kits (RecAll, AllPrep, RNeasy, HPRNA, and PuLink), as a means of further assessing the downstream utility of the RNA. Six PCR primer pairs were designed to span exons 2 to 4 of the human beta-2-microglobulin mRNA (RefSeq NM_004048), with expected amplicon sizes ranging from 92 to 386 base pairs (bp) in approximately 50-bp increments (S1 Table). For each RT-PCR assay, RNA extracted using select kits was converted to cDNA using the SuperScript VILO cDNA Synthesis Master Mix (Thermo Fisher Scientific) according to the manufacturer's protocol.
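A minimal sketch of the inhibition-assay readout described above: the delta-Cq between spiked reactions and the water control, followed by a one-way ANOVA with Bonferroni-corrected pairwise comparisons. The duplicate Cq values and kit names are made up.

```python
import numpy as np
from scipy import stats

# Made-up duplicate Cq values: water control plus two spiked extracts.
cq = {
    "water": [24.1, 24.3],
    "kit_A": [24.2, 24.4],
    "kit_B": [26.0, 26.3],
}

# Inhibitory effect = mean Cq(spiked) - mean Cq(water control).
for kit in ("kit_A", "kit_B"):
    delta = np.mean(cq[kit]) - np.mean(cq["water"])
    print(f"{kit}: delta-Cq = {delta:.2f}")

# One-way ANOVA across all groups, then pairwise comparisons against the
# water control with a Bonferroni correction (p multiplied by the number
# of comparisons, capped at 1).
f_stat, p_anova = stats.f_oneway(*cq.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
n_comp = len(cq) - 1
for kit in ("kit_A", "kit_B"):
    _, p = stats.ttest_ind(cq[kit], cq["water"])
    print(f"{kit} vs water: Bonferroni p = {min(p * n_comp, 1.0):.4f}")
```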
For each RNA sample, six endpoint PCR reactions were performed using 48 ng of template cDNA in a 20-μL reaction mixture consisting of 0.4 μM primer pair, 200 μM dNTP, 1.5 mM MgCl2, and 0.5 U Taq DNA polymerase (Thermo Fisher Scientific). The programmed profile of the PCR reaction consisted of initial denaturation at 95°C for 3 min, followed by 40 cycles of denaturation at 95°C (30 sec), annealing at 55°C (30 sec), and extension at 72°C (1 min). To assess the amplifiable fragment length of the extracted DNA, four primer pairs were designed flanking the exon 2-intron 2 junction of the human beta-2-microglobulin gene (RefSeq NG_012920.1), with expected amplicon sizes ranging from 102 to 300 bp in approximately 65-bp increments (S1 Table). PCR amplifications were conducted as described above for each of the primer pairs, in six singleplex reactions using 100 ng of DNA extracted from the different kits as templates. Reactions containing DNA from fresh PC-3 cells (American Type Culture Collection, Manassas, USA) and double-distilled water were included as positive and negative controls, respectively. Following amplification, PCR products from each singleplex reaction were pooled for each kit, and 30 μL was run on a 3.0% Tris-borate-ethylenediaminetetraacetic acid (TBE) agarose gel at 100 V for 90 min. The gel was then stained with ethidium bromide and visualized under ultraviolet illumination using a GelDoc2000 documentation system (Bio-Rad, USA).

NanoString mRNA assay

The NanoString platform was used to further quantify mRNA extracted from four select kits (RecAll, AllPrep, PuLink, and RNeasy) and one fresh (PC-3) cell line, whose RNA was extracted using RNeasy. Sample preparation and hybridization for the NanoString mRNA assay were performed according to the manufacturer's instructions. Briefly, 100 ng of input RNA was hybridized to NanoString 48-plex Customer Assay Evaluation (CAE) probes at 65°C for 20 hr. The solution-phase hybridization products were then processed on the nCounter Preparation Station for automated removal of excess probe and immobilization of probe-transcript complexes on a streptavidin-coated cartridge. Barcoded signals were acquired using an nCounter™ Digital Analyzer from NanoString Technologies [10]. Data were analyzed using the nCounter™ digital analyzer software Version 2.1.1. Raw data were first normalized to the geometric mean of spiked-in exogenous positive controls (to correct for differences resulting from assay efficiency such as hybridization, purification, binding, etc.), and then by subtracting the hybridization background [11,12]. The hybridization background was estimated from the spiked-in negative controls as the mean negative-control signal plus 2 standard deviations (SDs). Graphical analysis and one-way ANOVA with Bonferroni corrections were performed to assess for significant differences in mean mRNA counts between the four select RNA kits.

RT-qPCR mRNA assays

Like the NanoString mRNA assays, RT-qPCR mRNA assays were used to further quantify mRNA extracted from the four prioritized RNA kits (RecAll, AllPrep, PuLink, and RNeasy). One hundred nanograms of RNA from each kit was converted to cDNA using SuperScript VILO master mix (Thermo Fisher Scientific). Three housekeeping genes (PGK1, KRT8, and HPRT1) were quantified by TaqMan-based RT-qPCR gene-expression assay kits (Thermo Fisher Scientific) using the ViiA7 qPCR thermocycler.
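A minimal sketch of the two-step NanoString normalization described above, operating on per-lane count arrays. The choice of scaling reference (e.g., an across-lane average of positive-control geometric means) is an assumption, as the exact reference is not stated here.

```python
import numpy as np

def normalize_lane(raw, pos_ctrl, neg_ctrl, ref_geomean):
    """Normalize one NanoString lane per the two steps described above.

    raw         endogenous probe counts for this lane
    pos_ctrl    spiked-in positive-control counts for this lane
    neg_ctrl    spiked-in negative-control counts for this lane
    ref_geomean scaling reference; an across-lane average of the
                positive-control geometric means is assumed here
    """
    raw, pos_ctrl, neg_ctrl = map(np.asarray, (raw, pos_ctrl, neg_ctrl))
    # Step 1: scale to the geometric mean of the positive controls to
    # correct for lane-to-lane differences in assay efficiency.
    geomean = np.exp(np.mean(np.log(pos_ctrl)))
    scaled = raw * (ref_geomean / geomean)
    # Step 2: subtract the hybridization background, estimated from the
    # negative controls as mean + 2 SD; floor at zero.
    background = neg_ctrl.mean() + 2 * neg_ctrl.std(ddof=1)
    return np.clip(scaled - background, 0.0, None)
```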
For these RT-qPCR data, graphical analyses and multiple comparisons with Bonferroni corrections (using two-way ANOVA) were performed to assess for significant differences in mean Cq values between different kits across the three genes.

Methylation-specific PCR DNA assay

To assess the compatibility of DNA extracted using five prioritized protocols (AllPrep, QIAamp, RecAll, DNeasy, and GenJet) with enzyme-based downstream reactions, we performed methylation-specific PCR (MS-PCR), which is frequently employed in biomarker studies [13]. As preparation for MS-PCR, 100 ng of genomic DNA extracts was treated with sodium bisulfite, which converts unmethylated cytosine into uracil, and then column-purified according to the manufacturer's protocol (EpiMark bisulfite conversion kit; NEB, USA). MS-PCR assays were carried out as published in our previous study [14]. Briefly, 2 μL of the purified DNA was used in a 10-μL MS-PCR reaction involving amplification of targets in the bisulfite-converted CpG islands of the genes GSTP1, ABCB1 and RASSF1, all known to be hypermethylated in prostate cancer. Alu repeat elements were used as the positive control [14][15][16]. Multiple comparisons were performed using two-way ANOVA to assess significant differences in mean Cq values between different kits for each MS-PCR gene target.

Reproducibility of the AllPrep protocols

The reproducibility of the AllPrep extraction protocols was tested across three independent laboratories based on serial extractions of RNA and DNA from 12 FFPE prostate cancer samples. Briefly, nine cores were harvested from each of the 12 samples using a 0.6-mm Estigen Punch Set (Estigen, Estonia). To ensure uniformity in the quantity and quality of the input tissue, the harvested cores were pooled and homogenized. They were then distributed in equal-volume aliquots to participating laboratories, where RNA and DNA were extracted using the AllPrep protocols and assessed for yield and compatibility with downstream molecular applications (NanoString and MS-PCR) using the methods described above [14]. Inter-laboratory variability was determined by one-way ANOVA.

Results and discussion

In this study, we compared the nucleic acid yield and quality from FFPE tissue cores across six DNA and six RNA extraction kits, as well as two dual-extraction kits (DNA/RNA) (Table 1). Based on their nucleic acid yield and purity, four RNA and five DNA extraction protocols were prioritized, and their utility in downstream molecular applications was further assessed using NanoString and reverse-transcriptase quantitative PCR for RNA, and methylation-specific PCR assays for DNA. Of the dual-extraction kits (AllPrep and RecAll), Qiagen's AllPrep kit was selected as the optimal one for assessing the reproducibility of extraction protocols across three independent research laboratories.

Nucleic acid yield and purity

The two dual-nucleic acid (RNA/DNA) extraction kits, AllPrep and RecAll, yielded 2,512.08 and 2,249.95 ng of RNA per mm³ tissue, respectively. The two RNA-only kits with the highest yields were the PuLink (3,603.48 ng/mm³) and RNeasy (2,713.04 ng/mm³) kits, followed by HPRNA, EZNRNA, NorRNA, and NucRNA. By spectrophotometric assessment, all of the RNA extraction kits produced A260/280 ratios close to 2.0, consistent with highly "pure" samples. In contrast, with the exception of the RNeasy kit, all of the RNA extraction kits produced A260/230 ratios that indicated significant impurities (Table 2).
Similarly for DNA, the A260/280 ratios were near or above 1.8 for all kits. In contrast, the A260/230 ratios for the RecAll, AllPrep, QIAamp, and DNeasy kits indicated potential organic contaminants in the eluent (Table 2). Contaminants such as EDTA, phenol, heme, and carbohydrates all have absorbances near 230 nm [5,9,[17][18][19]]. Given that these contaminants can inhibit downstream applications [4,20], we undertook a series of assays to quantify their inhibitory effect [6,9].

Inhibition effects on downstream applications

The inhibition assay was designed to quantify the impact of any inhibitors in eluted RNA and DNA samples on downstream applications. Extracted RNA and DNA were spiked into a non-target PCR reaction, and the observed delay in the Cq value of the RNA- or DNA-spiked reaction relative to that of the control (water-spiked) was interpreted as the extent of the inhibitory effect. One-way ANOVA with Bonferroni's correction showed that none of the RNA tested had any effect on the Cq relative to the control, indicating that the RNA was pure (Fig 1A; p > 0.05). On the other hand, a significant difference was observed between the Cq values for the AllPrep DNA-spiked reactions and the control (Fig 1B, p < 0.05). The other DNA extraction kits showed no significant differences in Cq values compared to the control reaction. To further evaluate the inhibitory effect of the AllPrep DNA on PCR reactions, we carried out additional inhibition assays using independent prostate and breast cancer samples (see S4 Fig and S2 Table). Specifically, DNA was extracted from the respective samples using the AllPrep protocol, and then spiked into non-target PCR reactions at full concentration, at a 1:10 dilution, and at a 1:20 dilution. At full concentration, significant delays in Cq were observed for both of the prostate cancer samples, but not for the breast cancer sample. With both prostate cancer samples, the delays were reversed by a ten-fold dilution of the DNA prior to spiking. Using the AllPrep kit on needle cores of FFPE tissue, a typical extraction yielded DNA concentrations ranging from 20 to 50 ng/μL. Thus, in our inhibition assay, the concentration of the spiked DNA in the reaction mixture would be in the range of 0.2 to 0.5 ng/μL of final reaction volume. This is well within the range of template DNA concentrations for most PCR-based applications. Thus, while the AllPrep DNA contains impurities that can inhibit PCR reactions, these are sufficiently diluted in routine use to nullify any inhibitory effect.

Assessment of the size distribution of RNA and DNA fragments

Practical considerations in biomarker development include ease of specimen procurement, biomarker stability under archived conditions, and compatibility of the assay with established clinical workflows. Currently, FFPE tissues are the standard specimens for diagnosis and represent a vast repository of research specimens linked with long-term clinical follow-up data. Although past studies have demonstrated utility for nucleic acids extracted from archived specimens in genomic analyses [21], it is well documented that their degradation into small fragments poses technical challenges for molecular methods [4]. Formalin fixation leads to the formation of crosslinks, which increase the sensitivity of the strands to mechanical stress and decrease the accessibility of polymerases and other enzymes [22]. We postulated that the chemicals employed in the various extraction kits may affect the size distribution of the final nucleic acid product.
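The next section summarizes fragment size as DV200, the percentage of electropherogram area contributed by fragments longer than 200 bp. For reference, a toy computation of that metric from a made-up size/signal trace:

```python
import numpy as np

def dv200(sizes_bp, signal):
    """DV200: percent of total electropherogram signal from fragments
    longer than 200 bp (a relative, not absolute, measure). Assumes the
    trace is sampled at uniform size increments."""
    sizes_bp, signal = np.asarray(sizes_bp), np.asarray(signal)
    return 100.0 * signal[sizes_bp > 200].sum() / signal.sum()

# Toy trace: a made-up fragment-size peak centred near 150 bp.
sizes = np.arange(25, 1000, 25)
signal = np.exp(-((sizes - 150.0) ** 2) / (2 * 120.0 ** 2))
print(f"DV200 = {dv200(sizes, signal):.1f}%")
```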
We evaluated the size distribution of RNA products using the Bioanalyzer platform. Given that many downstream methods used for genotyping and Next Generation Sequencing are designed for templates > 150 bp, we quantified the amount of nucleic acid fragments > 200 bp as the percentage corrected area under the electropherogram. We found a wide range of DV200 values, from 70% (NorRNA) to 10% (NucRNA). Notably, AllPrep and RecAll, the two kits that permit dual extraction of DNA and RNA, had DV200 values of 54% and 43%, respectively (Table 2). Among RNA-only extraction kits, the RNeasy extracts yielded the largest fragment distribution peak in the electropherogram (S1 Fig). It is important to note that DV200 values represent relative, rather than absolute, amounts of fragments > 200 bp and thus do not necessarily reflect the performance of nucleic acids for use in downstream reactions such as PCR.

Fig 1. Inhibition assays. Inhibition assays were set up as qPCR reactions using murine genomic DNA; a primer set specific to the mouse HSD11β1 gene (S1 Table); and the PowerUp SYBR Green Master Mix.

To assess the compatibility of the nucleic acids in PCR reactions more directly, we performed a series of endpoint PCR assays. As expected, RNA and DNA templates prepared from cultured cells yielded considerably stronger bands compared to templates prepared from equal amounts of RNA or DNA extracted from FFPE tissues (Fig 1C). Products from all five RNA protocols produced visible bands at 92, 142, 200, 248, and 303 bp, with the RecAll, AllPrep, and HPRNA products showing the highest band intensities. At 386 bp, only the RNA extracted using the RecAll and AllPrep kits produced appreciable bands (Fig 1C). For the five DNA protocols, discernible bands were seen at all amplicon sizes (102, 165, 225, and 300 bp), with slightly weaker intensities for DNeasy (Fig 1C). The utility of end-point PCR-based assays for assessing the quality of RNA and DNA from FFPE tissue has been demonstrated previously [23,24]. In these assays, the relative intensities of the various sized bands for a given sample reflect the size distribution for that sample, while the differences in the band intensities between the samples for a given primer pair reflect the varying extent of contamination. In the current study, no clear correlation was observed between the fragment size distribution as determined by end-point PCR and that determined by Bioanalyzer analysis. It is noteworthy, however, that extensive fragmentation of both RNA and DNA, which is typical for FFPE samples, made the interpretation of the Bioanalyzer electropherogram unreliable (S1 Fig). Based on yield and purity, five DNA protocols (AllPrep, RecAll, DNeasy, QIAamp, and GenJet) and four RNA protocols (AllPrep, RecAll, PuLink, and RNeasy) were prioritized for further quality assessment. Both dual-extraction kits (AllPrep and RecAll) were included based on their capacity to extract both RNA and DNA from the same tissue source, which provides a significant advantage in biomarker studies. The prioritized kits were further evaluated using NanoString, RT-qPCR expression, and MS-PCR assays.

mRNA assessment by NanoString and RT-qPCR

The NanoString platform has previously been used for direct, digital quantitation of specific mRNA transcripts through hybridization to two sequence-specific, color-coded probes [12]. As this technology is strictly hybridization-based, it avoids the use of reverse transcription and amplification, and thereby eliminates potential amplification bias common to PCR.
The RNA levels determined using NanoString are likely to be more accurate, since this assay allows direct detection with molecular barcodes. NanoString-based systems are sensitive, reproducible, and highly multiplexable for detecting nucleic acid targets across all levels of biological expression [10,25], and are becoming increasingly common in biomarker studies. RNA extracted from four select kits and one fresh (PC-3) cell line was run on the NanoString platform, and the mean differences between total mRNA counts were compared by one-way ANOVA. Pairwise comparisons between RNA extracted from the cell line and each of the four selected kits showed no significant difference in total signal counts (Fig 2A and S2 Fig). By RT-qPCR, RNA from all four prioritized RNA protocols was successfully used to amplify the three housekeeping genes tested (PGK1, KRT8 and HPRT1), showing the overall compatibility of these protocols with downstream gene expression assays. Of these, the AllPrep protocol yielded significantly higher Cq values for all three genes (two-way ANOVA, p < 0.0001; Fig 2B). Given that the size distribution of AllPrep RNA compared favourably to those of RecAll and PuLink by both DV200 and end-point PCR, the higher Cq values (i.e., reduced amplification) cannot be attributed to the fragment size. Rather, the data suggest the presence of contaminants in AllPrep RNA that have an inhibitory effect on the PCR reaction, consistent with the delayed Cq also observed with AllPrep DNA in the inhibition assays.

Methylation-specific PCR (MS-PCR)

DNA base modifications, especially methylation of cytosine in the CpG islands, play a critical role in the regulation of gene expression. MS-PCR is a robust method to detect cytosine methylation. It involves the conversion of unmethylated cytosine into uracil, which is subsequently detected by PCR amplification using primers specific to the conversion products. We performed MS-PCR on DNA extracted using the five prioritized protocols to assess whether these DNA extracts are compatible with this downstream application. We targeted three genes known to be hypermethylated in prostate cancer, namely GSTP1, ABCB1, and RASSF1, along with the Alu repeat sequences as a positive control [16,26,27]. DNA from the AllPrep, DNeasy, and GenJet kits resulted in similar Cq values (Fig 2C). Compared to the other kits, and using equivalent amounts of DNA input, RecAll and QIAamp had significantly higher Cq values for each of the three gene targets (two-way ANOVA, p < 0.05). Moreover, using RecAll-purified DNA samples, GSTP1 was not amplified at all, and ABCB1 and RASSF1 showed inconsistent amplification (Fig 2C). These results indicated that AllPrep DNA was most compatible with the methylation-specific PCR protocol used in this study.

Reproducibility of the AllPrep extraction protocol

Inter-laboratory validation of a molecular protocol or assay is performed by molecular pathology laboratories to ensure its reproducibility and accuracy, which are critical components of competent patient care [28][29][30]. We aimed to identify the optimal RNA and DNA extraction protocols and then determine whether they are reproducible across multiple laboratories. No single kit uniformly outperformed the others in all of the criteria compared, including yield, purity, and compatibility with downstream applications.
It is noteworthy, however, that the two dual-extraction kits tested reduce the amount of tissue required for analysis of both nucleic acid types by half, and further obviate concerns about matching RNA and DNA samples for integrated molecular profiling. Although both kits yielded RNA and DNA in quantities comparable to those of the dedicated RNA and DNA kits, the RNA product of AllPrep was of higher purity and longer in fragment length than that of RecAll, by spectrophotometric measurements and DV200, respectively (Table 2). Furthermore, DNA extracted using RecAll performed inconsistently in MS-PCR, thus excluding MS-PCR as a potential downstream assay for RecAll, whereas this was not the case for AllPrep. While the data indicate that AllPrep yields RNA and DNA products with contaminants that interfere with PCR amplification at higher concentrations, this inhibitory effect is negligible at the dilutions typically used for templates in PCR reactions. Ultimately, the choice of kit for nucleic acid isolation must be based on the overall design of the biomarker study. We placed a high priority on the metrics routinely used in preanalytical quality control, such as spectrophotometric absorbance and fragment distribution, and selected Qiagen's AllPrep DNA/RNA FFPE kit as the optimal one for assessing the reproducibility of extraction protocols between three laboratories. One-way ANOVA did not detect significant differences between laboratories in yield of DNA or RNA (p > 0.05, Fig 3A), or in the absorbance at 280, 260, and 230 nm (results not shown). The results of the MS-PCR (S3C Fig) and NanoString (S3D Fig) assays using nucleic acids extracted independently by the three laboratories were highly correlated (all Pearson's R² ≥ 0.94; p < 0.0001). Thus, the AllPrep kit produced nucleic acids that strongly correlated between the three laboratories in yield, quality, and compatibility with downstream applications. These results support a previous observation that the AllPrep kit provides reproducible protocols for the extraction of nucleic acids that are suitable for typical molecular downstream applications [14].

Effect of sample age on yield and downstream molecular applications

The 12 FFPE samples used in the inter-laboratory study on the AllPrep kit ranged in age from 7 to 15 years. Pearson's correlation analysis of the data failed to detect any correlation between the sample age and the yield (Fig 3B). Likewise, sample age was not significantly correlated with MS-PCR Cq values (Fig 3C) or with NanoString total mRNA counts (Fig 3D, p ≥ 0.05). While these results fail to implicate age as affecting the quality and quantity of the nucleic acids, the age of the sample as well as storage conditions are suspected to contribute to cumulative degradation of RNA and DNA through a time-dependent decrease of pH [23,31], thereby potentially negatively affecting biomarker studies. Other factors, such as fixation conditions, type of fixative, embedding procedure, and tissue type, typically remain constant for a given study.

Fig 3. (A) Bar graph (mean ± SD) comparing the yields of DNA and RNA extracted from 12 FFPE samples (circles) in three independent laboratories, using the AllPrep kit. (B) Correlation plot of DNA and RNA yield from the same 12 samples, as a function of age of sample. Each data point represents the yield for a given sample, extracted at a given laboratory, superimposed on a linear regression line.
Correlation of sample age with MS-PCR amplification cycle thresholds (C) or total mRNA counts in a NanoString assay (D), based on a representative set of genes assayed in each case. Each data point represents the Cq value or the total mRNA count for a given sample, extracted at a given laboratory, superimposed on a linear regression line. Detailed data and statistical analyses are presented in the supplementary S2 Table.

Limitations of this study

This study has some notable limitations. Kit performance was tested on cores but not on tissue sections, which may be better suited for certain applications. Nevertheless, cores are a popular and efficient method of selecting tissue of interest for molecular assays. In addition, the head-to-head kit comparison implemented was based on relatively recent tissue samples. Older samples were evaluated with the AllPrep kit but not with any of the other kits. Furthermore, the bulk of the data reported here derives only from prostate tissue samples. However, the AllPrep protocol yields similar results in other tissue types (S4 Fig) [14].

Conclusions

In this study, we compared nucleic acid yield and quality across 16 DNA and RNA extraction protocols from 14 commercial kits. All the protocols tested yielded nucleic acid quantities that were compatible with downstream molecular applications, although their performances in these applications varied widely. Based on nucleic acid yield and purity, a selection of RNA (RecAll, AllPrep, PuLink, and RNeasy) and DNA (RecAll, AllPrep, QIAamp, DNeasy, and GenJet) extraction protocols were prioritized for further evaluation using methylation-specific PCR, NanoString, and reverse-transcriptase quantitative PCR assays. The data herein provide the necessary metrics to guide the selection of a protocol that best suits the needs of the overall study design in terms of the quantity of available tissue and the anticipated downstream applications. Overall, the AllPrep protocol reproducibly yields high quantities of matched RNA and DNA from the same tissue source. While the impurities of nucleic acids extracted using the AllPrep kit appear to impact PCR-based methods at high concentration, this effect is negligible at dilutions typical of templates in PCR-based assays. Taken together, the AllPrep kit was our preferred method for preparing nucleic acids for downstream epigenetic and gene expression studies.

S1 Table. Primer pair names, sequences, and resulting PCR amplicon sizes. The table shows six specific primer pairs (CP1-6) for the Homo sapiens β-2-microglobulin (B2M) mRNA (RefSeq NM_004048), four primer pairs for B2M DNA (qP1-4), and one primer pair specific for the amplification of HSD11B1 DNA. Also included are the expected PCR amplicon sizes of the respective primer pairs. (XLSX)

S2 Table. Excel book with statistical analyses used in the assessment of DNA and RNA extraction kits. This Excel book contains data used to generate figures, and statistical analyses used in data comparisons. Each tab represents data for each figure or panel, as labeled. All statistical tests reported in this study were done using the GraphPad Prism v7 software (GraphPad Software Inc., USA).
Rhythm and Melody Tasks for School-Aged Children With and Without Musical Training: Age-Equivalent Scores and Reliability

Measuring musical abilities in childhood can be challenging. When music training and maturation occur simultaneously, it is difficult to separate the effects of specific experience from age-based changes in cognitive and motor abilities. The goal of this study was to develop age-equivalent scores for two measures of musical ability that could be reliably used with school-aged children (7-13) with and without musical training. The children's Rhythm Synchronization Task (c-RST) and the children's Melody Discrimination Task (c-MDT) were adapted from adult tasks developed and used in our laboratories. The c-RST is a motor task in which children listen and then try to synchronize their taps with the notes of a woodblock rhythm while it plays twice in a row. The c-MDT is a perceptual task in which the child listens to two melodies and decides if the second was the same or different. We administered these tasks to 213 children in music camps (musicians, n = 130) and science camps (non-musicians, n = 83). We also measured children's paced tapping, non-paced tapping, and phonemic discrimination as baseline motor and auditory abilities. We estimated internal-consistency reliability for both tasks, and compared children's performance to results from studies with adults. As expected, musically trained children outperformed those without music lessons, scores decreased as difficulty increased, and older children performed the best. Using non-musicians as a reference group, we generated a set of age-based z-scores, and used them to predict task performance with additional years of training. Years of lessons significantly predicted performance on both tasks, over and above the effect of age. We also assessed the relation between musicians' scores on music tasks, baseline tasks, auditory working memory, and non-verbal reasoning. Unexpectedly, musician children outperformed non-musicians in two of three baseline tasks. The c-RST and c-MDT fill an important need for researchers interested in evaluating the impact of musical training in longitudinal studies, those interested in comparing the efficacy of different training methods, and those assessing the impact of training on non-musical cognitive abilities such as language processing.
INTRODUCTION

Researchers, music teachers, and parents have a strong interest in understanding and assessing children's musical abilities. However, measuring these abilities in childhood can be a challenge because training and normal maturation occur simultaneously, making it difficult to disentangle the effects of music experience from cognitive and motor development (Galván, 2010; Corrigall and Schellenberg, 2015). This also makes comparisons with adult musicians problematic. Therefore, the goals of this study were to develop measures of musical ability that could be reliably used with school-aged children (7-13), and to generate a set of age-based scores for children with and without training. The resulting children's Rhythm Synchronization Task (c-RST) and children's Melody Discrimination Task (c-MDT) were based on two tasks previously used with adults (RST: Chen et al., 2008; MDT: Foster and Zatorre, 2010a). For both tasks, we assessed whether children's patterns of performance would be similar to adults' across levels of difficulty, whether performance would be better for children with music training, and whether scores would increase with age. Using the age-normed scores derived from the non-musician sample, we also assessed the contributions of years of music training to performance, and the possible relationships between music and cognitive abilities, including auditory working memory. Musical ability is defined as the innate potential to perceive, understand, and learn music (Law and Zentner, 2012; Schellenberg and Weiss, 2013). It is assumed that, like other innate capacities, musical abilities are normally distributed in the population (Schellenberg and Weiss, 2013), and that even without musical training these abilities develop with age (Stalinski and Schellenberg, 2012). In the first year, infants can discriminate between simple rhythm patterns and meters (Hannon and Johnson, 2005). Producing synchronized movement takes longer to master. Children as young as four can tap to a beat, and this ability improves between 4 and 11 years old (Drake et al., 2000). Existing evidence shows that by age 7 children can reproduce very short rhythms (Drake, 1993; Drake et al., 2000; Repp and Su, 2013).
Children become more sensitive to the metrical structures of their culture with exposure to music (Corrigall and Schellenberg, 2015), and by adulthood are better at detecting changes in rhythms with a metrical structure specific to their culture (Hannon and Trehub, 2005). Basic melody discrimination is in place very early in life. Even before birth, near-term fetuses can detect a change in pitch of roughly an octave (Lecanuet et al., 2000). By 2 months old, infants can discriminate between semitones, and they can process transposed songs, a more cognitively demanding task, by early childhood (Trainor, 2005, 2009). The brain's response to auditory stimuli has a relatively long developmental timeframe, continuing to mature until 18-20 years old (Ponton et al., 2002). As children move through the school years, they are more sensitive to aspects of music specific to their culture (Corrigall and Schellenberg, 2015). Implicit knowledge of key membership is acquired first, followed by implicit knowledge of harmony (Lynch et al., 1990; Trainor and Trehub, 1994; Schellenberg et al., 2005). Explicit knowledge of key membership and harmony begins around 6 years old and continues to develop until 11 years old (Costa-Giomi, 1999). School-aged children with musical training, even as little as 1-3 years, have been found to score higher on musical tasks than those with no training. Longitudinal and quasi-experimental studies provide the most compelling evidence for the effects of musical training on musical abilities. Six-year-olds who received 15 months of keyboard lessons improved on a combined melodic and rhythmic discrimination score compared to controls (Hyde et al., 2009). In a sample of children aged 7-8, rhythm and tonal discrimination improved significantly more after 18 months of musical training than after science training (Roden et al., 2014b). In another study, children were followed from ages 7-13; those with music training showed better detection of deviant musical stimuli, as measured with the mismatch negativity ERP response (Putkinen et al., 2013). Most recently, children aged 6-8 were given group music lessons, group soccer training, or no training for 2 years (Habibi et al., 2016). The musically trained children were the most accurate at discriminating changes in pitch. The earliest tests for measuring children's musical ability included both perceptual tasks, such as discriminating among pitches or timbres, and motor tasks, such as controlling tempo while singing (Seashore, 1915). Subsequent batteries have focused more on perceptual tasks, perhaps due to the difficulty of administering and evaluating children's musical performance objectively. The most recent and well-known batteries of music perception with age-equivalent scores for school-aged children are the Primary and Intermediate Measures of Music Audiation (PMMA and IMMA; Gordon, 1979, 1986). The PMMA and IMMA are commonly used in research, given that there are norms for children in different age groups. However, these norms have not been updated for three to four decades. Thus, cohort effects related to changes in music listening and in cognitive variables known to be related to musical abilities may make these norms less valid for current use (Nettelbeck and Wilson, 2004). More recent test batteries include the Montreal Battery of Evaluation of Musical Abilities (MBEMA; Peretz et al., 2013), which was administered to a large sample of Canadian and Chinese children aged 6-8.
Like the PMMA and IMMA, the MBEMA consists of perceptual discrimination tasks (contour, scale, interval, and rhythm), with an added memory task. Although scores are reported for children with up to 2 years of musical training, the test was designed to identify amusia (an auditory-processing deficit), and as such may not be sensitive enough to detect differences in ability between children with and without training, or changes with age. Most recently, researchers developed a battery of tests of music perception, standardized on over 1,000 Brazilian schoolchildren aged 7-13 (Barros et al., 2017). Test scores showed no correlations with age, indicating that the task may not be useful in a developmental context. In addition, no musically trained children were included in the sample. In sum, children's musical abilities appear to change with age, and are influenced by musical training. It also appears that, overall, rhythm synchronization and melody discrimination abilities emerge at different ages, with melodic abilities developing earlier. Further, more modern tests of musical abilities in children may be limited in their utility for examining the effects of development and training. Given the increased interest in assessing musical skills in childhood, an important goal of this study is to provide the community with reliable tests and up-to-date scores accounting for the influence of age. Cognitive abilities such as working memory and non-verbal reasoning change with age, and are associated with both musical training and musical aptitude (Schellenberg and Weiss, 2013; Swaminathan et al., 2016). Even after very little training, children score higher on age-equivalent measures of immediate and short-term working memory (Bergman Nutley et al., 2014; Roden et al., 2014a). In a well-known longitudinal study, children's scores on tests of global cognitive function increased after 36 weeks of music lessons, when compared to art lessons or no lessons (Schellenberg, 2004). In addition, there is evidence of associations between musical and language abilities (Patel, 2012; Gordon et al., 2015a). For instance, melody perception and language comprehension are strongly correlated by age 5 (Sallat and Jentschke, 2015), and young children's ability to detect large deviations of pitch in speech was found to improve after only 8 weeks of music lessons (Moreno and Besson, 2006). By age 6, children's rhythmic perceptual abilities are predictive of their ability to produce complex grammatical structures (Gordon et al., 2016). In children with lower SES, small amounts of music lessons may have a protective effect on literacy skills, compared to control subjects (Slater et al., 2014). Given the complex overlap between musical, cognitive, and language skills, and their relation to music training, in the current study we administered tests of auditory working memory and global cognitive function. The tests of musical ability developed for the current study are based on adult tasks. Both tasks were abbreviated and simplified to be more engaging and to have a shorter administration time. The children's Rhythm Synchronization Task (c-RST; Figure 1) and children's Melody Discrimination Task (c-MDT; Figure 2) were adapted following guidelines advanced by Corrigall and Schellenberg (2015), including adding a storyline, reducing test duration, and providing feedback.
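The abstract reports internal-consistency reliability estimates for both tasks. The exact estimator is not given in this excerpt, so as an illustration only, here is a generic Cronbach's alpha computation over a participants-by-items score matrix with made-up data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a participants x items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Made-up item scores for 5 participants on 4 task items.
scores = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```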
The Rhythm Synchronization Task (RST) is a computer-based task that assesses the ability to tap in synchrony with a series of rhythms that vary in metrical complexity. It is based on an adult task initially developed for brain imaging and then modified for behavioral studies (Chen et al., 2008). Adult professional musicians scored higher than non-musicians on the RST (Bailey and Penhune, 2010, 2012; Karpati et al., 2016). Moreover, irrespective of training, scores decreased as metric regularity (indicated by the presence of a steady pulse) decreased (Chen et al., 2008; Bailey and Penhune, 2010; Matthews et al., 2016). The RST was recently adapted for children, with the purpose of comparing typically developing children and those with autism spectrum disorder (Tryfon et al., 2017).

The Melody Discrimination Task (MDT) is a computer-based task that assesses the ability to discriminate between two melodies that differ by one note, either in the same key or transposed. Adult musicians outperformed non-musicians on this task (Foster and Zatorre, 2010a; Karpati et al., 2016), and scores are related to length of musical training (Foster and Zatorre, 2010b). For the current study this task was shortened, and a storyline added, for use with children. Items were selected for optimal reliability and difficulty.

The goal of the present study is to assess the influence of age and musical training on children's musical abilities using the RST and MDT, two tasks widely used with adults. Considering the different paradigms of these two tasks (i.e., RST, a production task, and MDT, a perceptual task), and the likely differences in the developmental trajectories of the rhythmic and melodic abilities measured, we assess rhythm and melody separately. We provide standardized scores for each age group, and use these scores to investigate the effects of musical training on task performance. Finally, we assess the relation between musical, baseline and cognitive abilities in musically trained children.

Participants

We tested 213 children aged 7-13 years in music and science camps in Montréal, Ottawa, and Waterloo, Canada. Children were categorized as musicians (n = 130) or non-musicians (n = 83) based on a parent questionnaire adapted in our lab (Survey of Musical Interests; Desrochers et al., 2006). The term musician was operationalized as a child who had at least 2.5 years of consecutive music lessons (M = 5.06 years, SD = 1.58, range 2.74-10.00). Music lessons were operationalized as extracurricular, weekly, one-on-one sessions of at least 30 min in duration, taught by an expert. Child musicians also practiced for at least half an hour a week (M = 3.16 h, SD = 2.49, range = 0.50-14.00). Music practice could be structured (using a book or specific exercises) or unstructured (free playing), as long as it occurred outside of lessons and on the same instrument. The term non-musician was operationalized as a child with no more than 2.5 years of consecutive lessons (M = 0.43, SD = 0.74, range 0.00-2.30). We assessed children's SES by estimating maternal years of education. As in the original questionnaire, mothers reported their highest level of education on an ordinal scale. We converted this to an approximate interval scale with the following estimates: high school = 12 years; college diploma = 14 years; baccalaureate degree = 16 years; master's degree = 18 years; doctorate or medical professional degree = 22 years. Demographic and practice-related characteristics for all children, by musicianship and age group, are given in Table 1.
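Purely as an illustration, the group-assignment and SES-coding rules above can be written out in a few lines of Python. The field names and the two example participants are hypothetical, and the 2.5-year boundary is assigned to the musician group here, since the two written criteria overlap at exactly 2.5 years.

```python
# Hypothetical sketch of the participant-coding rules described above.
# The education-to-years mapping follows the text; field names are illustrative.

EDUCATION_YEARS = {  # ordinal maternal-education level -> approximate years
    "high school": 12,
    "college diploma": 14,
    "baccalaureate degree": 16,
    "master's degree": 18,
    "doctorate or medical professional degree": 22,
}

def classify_musicianship(years_of_consecutive_lessons: float) -> str:
    """Apply the 2.5-year cut-off used to define musicians.
    The boundary case (exactly 2.5 years) is assigned to 'musician' here."""
    return "musician" if years_of_consecutive_lessons >= 2.5 else "non-musician"

# Example usage with made-up participants (not study data)
participants = [
    {"lessons": 5.0, "mother_education": "master's degree"},
    {"lessons": 0.5, "mother_education": "college diploma"},
]
for p in participants:
    p["group"] = classify_musicianship(p["lessons"])
    p["ses_years"] = EDUCATION_YEARS[p["mother_education"]]
    print(p)
```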
Parents provided written consent and children provided verbal assent before participating. Children were given a gift card and a small toy as thanks for their participation. The study was approved by Concordia University's Human Research Ethics Board.

Rhythm Synchronization Task

The child version of the RST (c-RST; Figure 1) differs from the adult task in several ways (Tryfon et al., 2017). First, to make it more engaging, a storyline and corresponding graphics were generated. Next, task difficulty was reduced by removing the most difficult ("non-metric") rhythm level and replacing it with an easy ("strongly metric") level. Thus, the c-RST has three levels of rhythmic complexity that vary in difficulty from easiest to hardest: Strongly Metric, Medium Metric, and Weakly Metric. There are two rhythms per difficulty level, for a total of six rhythms, which are presented in counterbalanced order. Rhythms were matched for number of notes; each rhythm consists of 11 woodblock notes spanning an interval of 4-5.75 s, including rests. As with the adult task, a single trial of the c-RST consists of two phases: (1) "Listen" and (2) "Tap in Synchrony." In the graphical display, a giraffe with headphones is displayed on the computer screen. During the Listen phase, the giraffe's headphones are highlighted, indicating that the child should listen to the rhythm without tapping. During the Tap in Synchrony phase, the giraffe's hoof is highlighted, indicating that the child should tap along in synchrony with each note of the rhythm using the index finger of the right hand on a computer mouse. Each of the six rhythms is presented for three trials in a row, for a total of 18 trials. Before starting the test, children complete five practice trials at the Strongly Metric level, with feedback from the experimenter. The rhythms used for the practice trials are not those used in the main task. Performance on the c-RST is measured with two outcomes: (1) percent correct, or the child's ability to tap within the "scoring window" (as explained below); and (2) percent inter-tap interval (ITI) synchrony, or the child's ability to reproduce the temporal structure of a rhythm. The percent correct is calculated as the proportion of taps that fall within the scoring window (i.e., half the interval before and after each stimulus note). The ITI synchrony is calculated from the ratio of the child's response intervals (r) to the stimulus time intervals (t), with the following formula: score = 1 − |r − t| / t. For both percent correct and ITI synchrony, proportions are multiplied by 100 to generate a percentage.

Tapping and Continuation Task

The Tapping and Continuation Task (TCT) has been used in both adults and children to measure basic synchronization and timing abilities that do not differ between those with and without musical training (Aschersleben, 2002; Balasubramaniam et al., 2004; Whitall et al., 2008; Corriveau and Goswami, 2009; Matthews et al., 2016; Dalla Bella et al., 2017; Tryfon et al., 2017). The ability to synchronize to a beat has also been found to relate to general cognitive domains such as language and attention (Tierney and Kraus, 2013). Thus, the TCT may serve as an auditory-motor and cognitive control task for the c-RST. For this task, children tap along with an isochronous rhythm of woodblock notes for 15 s (paced tapping), and are instructed to continue tapping at the same tempo for 15 s once the rhythm stops (non-paced tapping).
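To make the two c-RST scoring rules concrete, here is a minimal sketch in Python of how they could be computed from note-onset and tap-onset times. It reflects one plausible reading of the scoring description: the window boundaries at the ends of a rhythm and the one-tap-per-note pairing for ITI synchrony are assumptions, and the coefficient-of-variation measure used for the tapping task (detailed in the next section) is shown here in a simplified per-trial form.

```python
import numpy as np

def rst_percent_correct(stim_times, tap_times):
    """Percent of taps falling inside the scoring window: half the
    inter-onset interval before and after each stimulus note."""
    stim = np.asarray(stim_times, dtype=float)
    taps = np.asarray(tap_times, dtype=float)
    half = np.diff(stim) / 2.0
    lo = stim - np.concatenate(([half[0]], half))   # window lower edges
    hi = stim + np.concatenate((half, [half[-1]]))  # window upper edges
    in_any = np.zeros(len(taps), dtype=bool)
    for l, h in zip(lo, hi):
        in_any |= (taps >= l) & (taps <= h)
    return 100.0 * in_any.mean()

def rst_iti_synchrony(stim_times, tap_times):
    """Mean of 1 - |r - t| / t over corresponding inter-tap/inter-onset
    intervals, as a percentage (assumes one tap per note)."""
    t = np.diff(np.asarray(stim_times, dtype=float))
    r = np.diff(np.asarray(tap_times, dtype=float))
    return 100.0 * np.mean(1.0 - np.abs(r - t) / t)

def tapping_cv(tap_times):
    """Coefficient of variation of inter-tap intervals for one TCT trial."""
    iti = np.diff(np.asarray(tap_times, dtype=float))
    return float(np.std(iti) / np.mean(iti))

# Toy example: a regular 500-ms stimulus and slightly noisy taps
stim = np.arange(0.0, 5.0, 0.5)
taps = stim + np.random.default_rng(0).normal(0.0, 0.03, len(stim))
print(rst_percent_correct(stim, taps), rst_iti_synchrony(stim, taps))
```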
The tapping task runs for six trials at the same tempo [inter-stimulus interval (ISI) of 500 ms]. Performance is measured in terms of tapping variability; paced and non-paced trials are scored separately. The ITIs and their respective standard deviations are averaged across all six trials for paced and non-paced tapping. The average SD is then divided by the average ITI to generate a coefficient of variation (i.e., the child's tapping variability relative to his or her own performance).

Melody Discrimination Task

For each trial of the MDT, participants listen to two melodies of equal duration separated by a 1.2-s silence, and then indicate whether the second melody is the same as or different from the first. There are two conditions: Simple and Transposed. In the Simple condition, both melodies are in the same key. In the "different" trials, the pitch of a single note in the second melody is shifted up or down by up to five semitones, while preserving the contour of the first melody. The participant thus must compare individual pitches to detect the deviant note. In the Transposed condition, all the notes in the second melody are transposed upward by four semitones (a major third). In the "different" trials a single note is shifted up or down by one semitone, while preserving the contour of the first melody. Thus, the participant must use relative pitch to perceive the deviant note within a transposed model. All melodies in the MDT were composed of low-pass-filtered isochronous harmonic tones (320 ms each, corresponding to a tempo of 93.75 bpm) from the Western major scale, using tones taken from the two octaves between C4 and E6. All major scales are represented except B, F-sharp, and C-sharp; minor scales include E, A, and E-flat.

The child version of the MDT (c-MDT; Figure 2) differs from the adult version in several ways. The adult version comprises 180 melodies (90 simple and 90 transposed), which range from 5 to 13 notes per melody. This was considered too long for testing with children, so 60 items were selected (30 simple and 30 transposed) based on a reduced range of notes for lower difficulty (5-11 notes per melody). After this set of 60 items was administered to all children, we calculated item-level statistics post-hoc in order to retain a "best set" of data with the following criteria: (1) KR-20, or Cronbach's alpha for dichotomous items, of at least 0.50; (2) point-biserial correlation, or the degree to which items correlate with the total score for each condition, of at least 0.10; (3) item difficulty above chance; and (4) administration time under 20 min, including instructions and practice. The resulting best set is composed of 40 melodies, 20 per condition, with 5-11 notes per melody. The results reported in the current paper are for this best set. Raw score means and standard deviations for the 60-item set are provided for comparison in the Appendix in Supplementary Material. The Simple and Transposed conditions each have 20 trials, with an equal number of "same" and "different" trials per condition. Each condition is presented as two blocks of 10 trials with a break in between. The 20 trials are presented in random order within conditions, but the order of conditions is always the same (Simple, then Transposed) to preserve the storyline. In the corresponding graphical display, children see a teacher elephant who "sings" a melody, which is then repeated by either the "echoing elephant who sings it perfectly" or the "forgetful monkey who always makes a little mistake."
In the graphical display for the Transposed condition, children are again shown the teacher elephant who sings the melody, which is repeated by the "baby elephant" or the "baby monkey" who "sing in a much higher voice" (i.e., in a transposed key); they are instructed to ignore this difference and instead listen for the "little mistake."

Syllable Sequence Discrimination Task

The Syllable Sequence Discrimination Task (SSDT) was designed as a baseline task for the MDT that would place similar demands on auditory working memory ability. In the c-SSDT the child hears two sequences of 5-8 non-word syllables, spoken in a monotone with F0 held constant, and judges whether they are the same or different. Syllables were generated using permutations of 7 consonants [f, k, n, p, r, s, y] and 4 vowel sounds [a, i, o, u], which were then selected for minimal semantic association (Foster and Zatorre, 2010a). The c-SSDT contains the following 13 syllables: fah, foh, foo, kah, koh, nah, poh, rah, ree, roh, roo, sah, yah. Sequence lengths (5-8 syllables) were selected to match the adult version of the task. In the graphical display adapted for this task, the elephant and monkey are shown wearing robot helmets and are said to be "copying robot sounds," with the same response cue as in the c-MDT ("echoing elephant" or "forgetful monkey"). For both the c-MDT and c-SSDT, children are familiarized through four practice trials with the experimenter watching. Feedback is provided on the first two of these practice trials to ensure the child understands the task. After all trials, the word "correct" or "incorrect" is displayed for 1 s. Experimenters are seated so as not to see children's responses or feedback during experimental trials. Discrimination is scored as the percentage of correct responses. The child's responses are scored as 0 (incorrect) or 1 (correct), generating a proportion which is then multiplied by 100.

Cognitive Tasks

To assess cognitive abilities that might be related to performance on the music tasks, we administered the Digit Span (DS), Letter-Number Sequencing (LNS), and Matrix Reasoning (MR) subtests from the Wechsler Intelligence Scale for Children, fourth edition (WISC-IV; Wechsler, 2003). Digit Span is a measure of immediate auditory memory, in which the child repeats strings of digits forward or backward. Letter-Number Sequencing is a measure of auditory working memory and manipulation, in which the child hears a string of letters and numbers and must repeat them back in numerical and alphabetical order, respectively. Matrix Reasoning is a measure of non-verbal reasoning, and is considered to be a reliable estimate of general intellectual ability (Brody, 1992; Raven et al., 1998). For this task, the child must identify the missing portion of an incomplete visual matrix from one of five response options. All subtests were administered according to standardized procedures. Raw scores were converted to scaled scores based on age-based norms for all three subtests. The population-based mean for subtest scaled scores on the WISC-IV is 10, with a standard deviation of 3 (Wechsler, 2003).

General Procedure

Testing took place over a 1-h session. Participants were given short breaks between tasks to enhance motivation. Computer-based tasks were administered on a laptop computer running Presentation software (Neurobehavioral Systems, http://www.neurobs.com/). Auditory tasks were presented binaurally via Sony MDRZX100B headphones adjusted to a comfortable sound level.
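As an illustration of the SSDT stimulus design (not the study's actual stimulus lists), a same/different syllable-sequence trial of this kind could be generated as follows. The 50/50 same/different split and the no-repeats constraint within a sequence are assumptions made for the sketch.

```python
import random

SYLLABLES = ["fah", "foh", "foo", "kah", "koh", "nah", "poh",
             "rah", "ree", "roh", "roo", "sah", "yah"]

def make_ssdt_trial(rng=random):
    """One same/different trial: two 5-8 syllable sequences; on 'different'
    trials a single syllable is replaced (a simplifying assumption)."""
    length = rng.randint(5, 8)            # inclusive bounds, as in the task
    seq1 = rng.sample(SYLLABLES, length)  # no repeats within a sequence
    seq2 = list(seq1)
    different = rng.random() < 0.5        # 50/50 split (assumption)
    if different:
        i = rng.randrange(length)
        seq2[i] = rng.choice([s for s in SYLLABLES if s not in seq1])
    return seq1, seq2, different

seq1, seq2, different = make_ssdt_trial()
print(" ".join(seq1))
print(" ".join(seq2))
print("different" if different else "same")
```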
Musical tasks were administered before cognitive tasks, with musical task order (either c-RST or c-MDT first) counterbalanced across participants. Cognitive tasks were administered in the order in which they appear in the original WISC-IV battery. All programs for administration and scoring, as well as a user manual with norms, will be made available upon request to the first author.

Sample Characteristics: Child Musicians and Non-musicians

Data for group differences in the sample are presented in Table 2. We first conducted a chi-square analysis to determine whether the number of boys and girls differed between musicians and non-musicians. There were significantly more female musicians than males, and significantly more male non-musicians than females [χ²(1) = 5.89, p = 0.015]. Subsequently we carried out ANOVAs with musicianship and gender as between-subjects factors. For Simple melodies there was a small but statistically significant musicianship-by-gender interaction [F(1, 209) = 5.53, p = 0.02, partial η² = 0.03], such that the difference between male musicians and non-musicians (20%) was greater than the difference between female musicians and non-musicians (12%). However, there were no such interactions for any other outcome variables of interest for either the c-RST or c-MDT. Thus, gender was not added as a covariate for group difference analyses. We conducted independent-sample t-tests, and calculated Hedges' g effect sizes, to examine the degree to which musicians and non-musicians differed in SES (estimated years of maternal education), cognitive variables including auditory working memory (Digit Span, LNS) and general intellectual ability (Matrix Reasoning), or performance on baseline tasks (Paced and Non-paced Tapping Variability, Syllable Sequence Discrimination). Cognitive data were lost for four children, but as they represent less than 5% of the sample these scores were not replaced (Kline, 2011). Twelve musicians' mothers and 10 non-musicians' mothers did not answer the question about maternal education.

Reliability

To examine internal-consistency reliability, we used Cronbach's alpha for the c-RST, which estimates the mean of all possible split-half reliabilities, and KR-20 for the c-MDT, equivalent to Cronbach's alpha for dichotomous variables. Reliability estimates were derived for musicians and non-musicians separately. Scores on the c-RST were found to be adequately reliable for musicians (α = 0.64) but slightly less so for non-musicians (α = 0.60). Score reliability was higher on the c-MDT and, similar to the c-RST, somewhat lower for non-musicians than for musicians.

Effects of Musicianship, Task, and Age

To examine the degree to which performance on the c-RST and c-MDT varied between musicians and non-musicians, across levels of each task (e.g., rhythmic complexity and melody type), and between children of different age groups, we carried out mixed-design ANOVAs. We included musicianship (musician or non-musician) and age group (7, 8, 9, 10, 11, 13) as between-subjects factors, and task level as a repeated measure (c-RST: Strongly Metric, Medium Metric, Weakly Metric; c-MDT: Simple Melodies, Transposed Melodies). Outcome variables for the c-RST were percent correct and ITI synchrony; the outcome for the c-MDT was percent correct.
Partial eta-squared effect sizes were calculated, and post-hoc analyses were carried out with Bonferroni corrections for multiple comparisons. For the c-RST (percent correct and ITI synchrony), the assumption of sphericity was violated, such that the variances of the differences between levels of rhythmic complexity were not homogeneous (Mauchly's W = 0.94, p = 0.002 for both). Thus, degrees of freedom for all effects were corrected using Greenhouse-Geisser estimates (ε = 0.94 for percent correct and 0.95 for ITI synchrony).

Age-Equivalent Scores

Given the main effects of age group for both the c-RST and c-MDT, we created age-equivalent (z-) scores for children on each task and their respective baseline tasks (TCT and c-SSDT), using the formula z = (raw score − age-group mean)/age-group standard deviation. Means and standard deviations were derived from non-musicians (n = 83), who serve as the reference group with very little or no musical experience. Raw score means and standard deviations for musicians and non-musicians are presented in Table 3 (with the 40-item version of the c-MDT reported), and z-score conversions are provided in Table 4. Based on these, researchers using the c-RST or c-MDT with new groups of children can compare performance to either the trained or untrained sample.

To examine the contribution of years of training to performance on the c-RST and c-MDT, we conducted hierarchical multiple regressions for all children with at least 1 year of lessons (n = 151; Tables 5-8). Outcome variables were z-scores for the c-RST (percent correct and ITI synchrony) and c-MDT (Simple and Transposed melodies). The predictor variable for all analyses was duration of lessons in years. Scores for the two baseline variables (Non-paced Tapping Variability and Syllable Sequence Discrimination) were entered at the first step, since these were statistically significantly better in musicians. For the c-RST percent correct, the regression model with only baseline variables accounted for 4.5% of the variance and was statistically significant (adjusted R² = 0.05, p = 0.012). Additional years of training accounted for no additional variance (adjusted R² = 0.04, p = 0.943). For the c-RST ITI synchrony, the model with only baseline variables was not statistically significant (adjusted R² = 0.001, p = 0.334). When years of lessons were added, these accounted for 5.6% of the variance and the model was significant (adjusted R² = 0.05, p = 0.003). Specifically, a one-year increase in lessons contributed to an increase of 0.24 standard deviations in ITI synchrony z-scores (β = 0.24, p = 0.003). This is equivalent to a raw-score increase of 1.5% in children without musical training. For the c-MDT Simple melodies, the model with only baseline variables was statistically significant (adjusted R² = 0.04, p = 0.013), and additional years of training accounted for 5.2% additional variance (adjusted R² = 0.09, p = 0.004). Specifically, a one-year increase in lessons contributed to an increase of 0.23 standard deviations in Simple melody z-scores (β = 0.23, p = 0.004). This is equivalent to a raw-score increase of 2.5% in children without musical training. For the c-MDT Transposed melodies, the model with only baseline variables was not statistically significant (adjusted R² = 0.02, p = 0.078).
Additional years of training accounted for 11.7% additional variance (adjusted R² = 0.13, p < 0.001). Specifically, a one-year increase in lessons contributed to an increase of 0.35 standard deviations in Transposed melody z-scores (β = 0.35, p < 0.001). This is equivalent to a raw-score increase of 2.9% in children without musical training.

Relation Between Musical and Cognitive Abilities

To examine how musical and baseline tasks relate to cognitive task performance in musicians, we calculated bivariate correlations between age-corrected scores for the seven musical and baseline tasks (c-RST percent correct and ITI synchrony; TCT paced and non-paced tapping variability; c-MDT Simple and Transposed melodies; and c-SSDT) and the three cognitive tasks (Digit Span, LNS, and Matrix Reasoning). Given the ample prior evidence that musical training and cognitive variables are positively correlated, bivariate correlations are reported at the one-tailed level of significance. Bonferroni corrections were applied to account for multiple correlations, with a resulting cutoff value of α = 0.002. Zero-order correlations are presented in Table 9. Accounting for multiple correlations, c-RST percent correct was not significantly correlated with cognitive variables. In contrast, c-RST ITI synchrony was significantly correlated with both working memory tasks (DS and LNS).

DISCUSSION

In the present study, we evaluated two tests of musical ability that were developed for school-age children (7-13 years of age), and present z-scores for groups with and without training. Our findings show that the c-RST and c-MDT are acceptably reliable, and that they are sensitive enough to demonstrate differences in performance between children with and without musical training, replicating findings from previous studies using the same tasks in adults. Overall, older children performed better than younger children. However, there were no discernible stepwise increases between age groups. Within-task performance also mirrored adult patterns, with scores decreasing across levels of metrical complexity for the rhythm task and better scores for the Simple compared to the Transposed conditions in the melody task. Using z-scores derived from the untrained sample, we found that music lessons significantly predicted task performance over and above baseline tasks. Finally, we found that, for musically trained children, performance on rhythm synchronization and syllable sequence discrimination tasks was highly correlated with working memory abilities.

When the c-RST and c-MDT were evaluated for internal consistency, both were found to be adequately reliable. However, reliability for the c-RST was lower than for the c-MDT. This difference likely reflects the smaller number of trials in the c-RST, but may also relate to having selected the "best set" of items on the c-MDT. Researchers using the 40-item c-MDT are therefore strongly encouraged to estimate their own internal-consistency reliability for comparison. We also found that reliability for both tasks was lower for children without musical training.
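For researchers who want to estimate reliability on their own samples, as recommended above, the two item statistics used here (KR-20 and the item-total point-biserial correlation) are straightforward to compute. A minimal sketch, assuming a children-by-items 0/1 response matrix; the simulated responses are illustrative only:

```python
import numpy as np

def kr20(responses):
    """KR-20 (Cronbach's alpha for dichotomous items).
    `responses` is an (n_children x n_items) array of 0/1 scores."""
    X = np.asarray(responses, dtype=float)
    k = X.shape[1]
    p = X.mean(axis=0)                       # item difficulties
    total_var = X.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1.0)) * (1.0 - (p * (1.0 - p)).sum() / total_var)

def point_biserial(responses):
    """Item-total correlations (here each item vs. the sum of the
    remaining items, one common variant of the statistic)."""
    X = np.asarray(responses, dtype=float)
    out = []
    for j in range(X.shape[1]):
        rest = np.delete(X, j, axis=1).sum(axis=1)
        out.append(np.corrcoef(X[:, j], rest)[0, 1])
    return np.array(out)

# Example: 5 children x 4 items of simulated 0/1 responses
resp = np.array([[1, 1, 0, 1], [1, 0, 0, 1], [1, 1, 1, 1],
                 [0, 0, 0, 1], [1, 1, 0, 0]])
print(kr20(resp), point_biserial(resp))
```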
These issues could be addressed by using psychometric techniques based in item response theory. For instance, future iterations of these tasks might include items that adapt to individual differences in ability, such that correct responding leads to more difficult items and vice versa (Kline, 2011; Harrison et al., 2017). Finally, because these tasks do not assess all aspects of musical skill, we recommend that they be used in combination with other complementary measures previously used with children. For example, rhythm perception ability could be measured with a musical rhythm discrimination task (e.g., Gordon et al., 2015b). Melody production could be measured with a pitch-matching singing task (e.g., Hutchins and Peretz, 2012).

In this child sample, musicians outperformed non-musicians on both musical tasks, consistent with findings from previous studies in adult musicians using the same tasks (Chen et al., 2008; Bailey and Penhune, 2010; Foster and Zatorre, 2010a; Karpati et al., 2016; Matthews et al., 2016). Moreover, the results are consistent with studies comparing children with and without training on other musical tasks (Hyde et al., 2009; Moreno et al., 2009; Roden et al., 2014b; Habibi et al., 2016). We also found the expected within-task effects in our child sample, such that raw scores decreased as task demands increased. For the c-RST, scores were lower as metric regularity (i.e., beat strength) decreased, consistent with previous studies using the RST with adults (Bailey and Penhune, 2010; Matthews et al., 2016). For the c-MDT, all children were better at detecting deviant melodies when presented in the same key rather than a transposed key, which is similar to previous studies with adults (Foster and Zatorre, 2010a,b).

As predicted, the oldest children scored highest, the effect of age being strongest on the c-RST. This is supported by a previous finding using the same task (Tryfon et al., 2017), and by more general findings that children's rhythmic abilities improve with age and exposure to the music of their own culture (Trainor and Corrigall, 2010; Stalinski and Schellenberg, 2012). Despite this overall difference, scores did not increase consistently between age groups, especially for the c-MDT. This is similar to a recent large study which found that music perception ability did not increase as a function of age in Brazilian children (Barros et al., 2017). Taken together, this suggests a need to consider non-linear growth trajectories in childhood, such as the monotonic function which has been used to describe the development of musical expertise in adults (Ericsson et al., 1993).
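Before turning to the training effects, note that the age-equivalent z-scoring and the two-step regressions reported above follow a standard pattern that can be sketched compactly. The Python fragment below (using statsmodels, with hypothetical variable names and simulated data) is illustrative only, not the study's analysis code:

```python
import numpy as np
import statsmodels.api as sm

def age_equivalent_z(raw, age_group, norms):
    """z = (raw - age-group mean) / age-group SD, with norms taken from the
    untrained reference sample (cf. Table 4). `norms` maps age -> (mean, sd)."""
    mean, sd = norms[age_group]
    return (raw - mean) / sd

def hierarchical_adj_r2(y, baseline_predictors, years_of_lessons):
    """Step 1: baseline tasks only; step 2: add years of lessons.
    Returns the adjusted R-squared at each step."""
    X1 = sm.add_constant(np.column_stack(baseline_predictors))
    X2 = sm.add_constant(np.column_stack(list(baseline_predictors)
                                         + [years_of_lessons]))
    return (sm.OLS(y, X1).fit().rsquared_adj,
            sm.OLS(y, X2).fit().rsquared_adj)

# Demo with simulated data for 151 trained children
rng = np.random.default_rng(1)
tap_cv, ssdt = rng.normal(size=151), rng.normal(size=151)
years = rng.uniform(1, 10, 151)
y = 0.3 * years + rng.normal(size=151)   # fake outcome z-scores
print(hierarchical_adj_r2(y, [tap_cv, ssdt], years))
```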
Using z-scores derived from children without musical training, we were able to successfully predict increases in musical task performance from additional years of lessons, over and above the influence of baseline variables (non-paced tapping and phonemic discrimination). For the c-RST, musical training predicted rhythm synchronization ability, over and above the influence of age and baseline variables. However, musicians in our sample do not score as high as adult non-musicians (e.g., Bailey and Penhune, 2012) until they have an average of 5 years of training (see Table 1). The neural substrates of auditory-motor integration develop across childhood, as demonstrated by cross-sectional studies showing that, without musical training, synchronization ability is on par with adult ability by late adolescence (Drake et al., 2000; Drewing et al., 2006; Savion-Lemieux et al., 2009). Thus, it appears that to perform at adult levels on the c-RST children should be at least 14 or have amassed at least 5 years of lessons.

We also found that musically trained children had less variability in non-paced timing than those without music lessons. This is consistent with adult studies using similar tasks (Repp, 2010; Baer et al., 2015). However, this apparent advantage for musicians appears only at ages 9 and 11 in our sample. This pattern is very similar to a much earlier study in which children with musical experience had lower tapping variability than non-musicians, but only at 8 and 10 years old; there was no difference for the youngest or oldest age groups (Drake et al., 2000). According to Dynamic Attending Theory, the neural oscillations underlying auditory-motor synchronization stabilize as children get older (Drake et al., 2000). These bottom-up timing abilities, which are based in oscillatory entrainment and increase naturally as children get older, may be temporarily enhanced by musical experience in early or middle childhood. This experience-dependent boost in middle childhood may then decline as the underlying mechanisms mature through adolescence, for both musicians and non-musicians. Adult professional musicians, in turn, have the lowest tapping variability as a function of extended practice, the benefits of which extend far beyond the changes due to maturation.

In contrast to rhythm synchronization, musical training was a strong predictor of improvement in melody discrimination ability, for both simple and transposed melodies. Transposition was especially sensitive to musical training, with the highest effect size for additional years of training on task performance. This is consistent with previous research showing that simple discrimination ability stabilizes in childhood (Stalinski and Schellenberg, 2012) whereas, without musical training, development of transposition discrimination is limited, with adolescents and adults performing at close-to-chance levels on this task (Foster and Zatorre, 2010b; Sutherland et al., 2013). Thus, the ability to detect changes in pitch within a transposed model may only develop fully in musically trained individuals.

Quite unexpectedly, child musicians performed better on the baseline Syllable Sequence Discrimination Task (c-SSDT) than children without musical training. This is at odds with previous studies with adults, where musically trained and untrained participants performed equally (Foster and Zatorre, 2010a; Karpati et al., 2016).
On the other hand, it is possible that adults simply process linguistic material more automatically than children, even those with musical training. Thus, children's enhanced performance on the c-SSDT is consistent with a possible transfer effect from music training to language-related skills that is limited to childhood. In addition to enhancing bottom-up (sensory) discrimination thresholds, musical training affects multiple top-down cognitive processes that may contribute to enhancing performance on non-musical tasks, or far-transfer effects (Patel, 2012; Moreno and Bidelman, 2014). One such effect is improved phonological awareness, which is the first stage of learning to read and involves segmenting components of speech as they occur in time (Moreno et al., 2011; Moritz et al., 2013). The c-SSDT requires listening to a pair of syllable sequences and identifying whether one syllable has changed. This may tap into skills related to phonological awareness. Indeed, brief musical training has been found to increase linguistic abilities in young children (Moreno and Besson, 2006; Moreno et al., 2009). Moreover, children at risk of language delays who received 1 year of music lessons showed no decline in basic literacy skills relative to control subjects (Slater et al., 2014).

Finally, we found that musicians' z-scores for the c-RST and c-SSDT, but not the c-MDT or TCT, were strongly related to aspects of working memory. Correlations between rhythm synchronization and cognitive performance are consistent with other studies of far-transfer demonstrating a relationship between rhythm and language skills in children. For example, children with specific language impairments score poorly on rhythmic production tasks (Gordon et al., 2016), and tapping variability in adolescents is negatively correlated with reading skill (Tierney and Kraus, 2013). On the c-MDT we observed an interesting contrast such that, while not statistically significant, Simple melodies related more strongly to Digit Span, whereas Transposed melodies related more to LNS. This is likely because DS requires only immediate auditory memory and attention, whereas LNS requires mental manipulation and thus imposes a heavier demand on working memory and executive control. Although tentative, this may lend additional behavioral evidence to the hypothesis that transposition is distinct from other discrimination abilities (Foster and Zatorre, 2010a; Foster et al., 2013; Sutherland et al., 2013). Moreover, when considered with our regression results, this suggests that transposition relates to higher-order cognitive abilities that are especially sensitive to the impact of musical training in childhood.

CONCLUSIONS

In conclusion, this study demonstrates that we have been successful in developing age-based scores for two reliable and valid tests of musical skill for school-age children that are sensitive to the effects of training. These tasks and the associated z-scores fill an important need for researchers trying to assess the impact of music training in childhood. We hope that they will be important tools for researchers interested in evaluating the impact of musical training in longitudinal studies, those interested in comparing the efficacy of different training methods, and those assessing the impact of training on non-musical abilities, such as reading skills and other cognitive functions.
AUTHOR CONTRIBUTIONS

KI was responsible for research design, data collection, data analysis, and writing; AP was responsible for data collection, writing, and editing; NF provided consultation for data analysis, and edited the manuscript; VP provided consultation for research design, data collection and data analysis, and edited the manuscript.
MicroRNA-379 inhibits the proliferation, migration and invasion of human osteosarcoma cells by targetting EIF4G2

Osteosarcoma (OS) is an aggressive malignant mesenchymal neoplasm amongst adolescents. The aim of the present study was to explore the various modes of action that miR-379 has on the proliferation, migration, and invasion of human OS cells; miR-379 achieves this by targetting eukaryotic initiation factor 4GII (EIF4G2). Human OS cell lines U2OS and MG-63 were selected and assigned into blank, miR-379 mimics, miR-379 mimic negative control (NC), miR-379 inhibitors, miR-379 inhibitor NC, EIF4G2 shRNA, control shRNA, and miR-379 inhibitor + EIF4G2 shRNA groups. The miR-379 and EIF4G2 mRNA expressions were detected using quantitative real-time PCR (qRT-PCR), and the EIF4G2 protein expression using Western blotting. MTT assay, scratch test, Transwell assay, and flow cytometry were performed to determine proliferation, migration, invasion, and cell cycle, respectively. In comparison with the miR-379 mimic NC group, the miR-379 mimics group had decreased EIF4G2 expression; the miR-379 inhibitors group showed increased EIF4G2 expression. Compared with the control shRNA group, the EIF4G2 expression was lower in the EIF4G2 shRNA group, and the miR-379 expression was decreased in the miR-379 inhibitor + EIF4G2 shRNA group. The proliferation, migration, and invasion abilities of OS cells were reduced in the miR-379 mimics and EIF4G2 shRNA groups. The percentage of OS cells at the G0/G1 stage was increased, and the percentage at the S-stage was decreased, in the miR-379 mimics and EIF4G2 shRNA groups. miR-379 may inhibit the proliferation, migration and invasion of OS cells through the down-regulation of EIF4G2.

Introduction

Osteosarcoma (OS) is the most common primary malignancy derived from the primitive bone-forming mesenchymal cells in the long bones [1]. OS commonly occurs in adolescents between the ages of 10 and 20, and accounts for 8.9% of cancer deaths amongst children and adolescents in the United States alone. There appears to be no significant difference by gender; however, the incidence of OS varies with ethnicity [2]. Currently, the most common treatment available for OS is pre- and post-operative chemotherapy in association with surgical treatment [3]. Unfortunately, despite advancements in the diagnosis and treatment of OS, the overall survival rate amongst patients has remained relatively constant since the mid-1980s [4]. Common risk factors linked to OS development include ionising radiation, alkylating agents, Paget's disease, hereditary retinoblastoma, the Li-Fraumeni familial cancer syndrome and other chromosomal abnormalities [2]. Recently, genetic aberrations have received increasing recognition as a key factor in the etiology of OS [2]. miRNAs (miRs) are a class of small non-coding (18-24 nt) RNAs [5]. miRs regulate gene expression by inducing mRNA degradation and/or repressing translation, and therefore participate in a number of biological processes, such as development, cell proliferation, differentiation, and apoptosis [1,6]. miR-664 promotes the proliferation of OS cells through the down-regulation of FOXO4 [7]. Down-regulated miR-125a-5p suppresses OS by targetting MMP-11 [8]. It has been shown that miR-155-5p and miR-148a-3p are species-conserved deregulated miRs in OS [9]. miR-379 is located on chromosome 14q32.31 and belongs to the δ-like 1 homolog-deiodinase, iodothyronine 3 (DLK1-DIO3) cluster [10,11].
The DLK1-DIO3 miR clusters have been reported to play a critical role in regulating tumor growth and metastasis, as well as driving tumor progression [12]. In addition, members of the clusters, miR-379, miR-409-3p/5p, and miR-154*, have previously been extensively evaluated and have been shown to correlate with bone metastasis in prostate cancer [11,13]. A recent study highlighted that miR-409-3p may play a role in the inhibition of OS cell invasion and migration through targetting catenin-δ1 [14]. Limited studies have investigated the role of miR-379 in OS and its underlying molecular mechanisms. Consequently, the hypothesis of the present study relates to emphasising the role of miR-379 in OS. During the present study, the effects of miR-379 on the proliferation, migration, and invasion of OS cells, through targetting eukaryotic initiation factor 4GII (EIF4G2), were explored in depth, in addition to its under-examined role in the development of OS.

Materials and methods

Cell recovery, culture, and grouping

The human OS cell lines U2OS and MG-63 were purchased from the Institute of Biochemistry and Cell Biology (Shanghai, China). The cell suspension and Dulbecco's minimum essential medium (DMEM) were lightly mixed, followed by centrifugation for a 5-min period at 1000 rpm. After discarding the supernatant, the cells were resuspended in DMEM (5 ml) containing 10% FBS and transferred into a T25 culture flask, which was then placed in an incubator at 37°C with 5% CO2. According to the growth condition, the culture medium was replaced 2-3 days later. The cells were subcultured after reaching 80-90% confluence. After discarding the medium, the cells were washed twice with PBS, digested for 2-5 min with 0.25% trypsin, suspended in DMEM (5 ml) containing 10% FBS and passaged at a ratio of 1:2-3. The U2OS and MG-63 cells were divided into the blank group (transfected with blank plasmids), the miR-379 mimics group (transfected with miR-379 mimics), the miR-379 mimic NC group (transfected with miR-379 mimic negative control (NC)), the miR-379 inhibitors group (transfected with miR-379 inhibitors), the miR-379 inhibitor NC group (transfected with miR-379 inhibitor NC), the EIF4G2 shRNA group (transfected with EIF4G2 shRNA), the control shRNA group (transfected with control shRNA) and the miR-379 inhibitor + EIF4G2 shRNA group (transfected with miR-379 inhibitors and EIF4G2 shRNA); the plasmid sequences are shown in Table 1.

Cell transfection

The U2OS and MG-63 cells in the logarithmic growth phase were cultured in a 24-well plate at 2 × 10⁵ cells per well overnight. When the density of cells reached 70-90%, 0.8 μg plasmid was added into 50 μl Opti-MEM, and 2 μl Lipofectamine 2000 was added into another 50 μl Opti-MEM. After 5 min at room temperature, the two compounds were mixed and incubated for 20 min. The mixture was then added to the 24-well plate, and the medium was replaced after transfection for 4-6 h. The experiment in each group was repeated three times.

Quantitative real-time polymerase chain reaction

After a 24-h period of transfection, the RNA of the transfected U2OS and MG-63 cells was extracted using TRIzol. A UV spectrophotometer was used to determine the purity and concentration of the extracted RNA, and agarose gel electrophoresis was used to assess the integrity of the extracted RNA.
A PrimeScript™ RT reagent kit (TaKaRa Biotechnology Ltd., Dalian, China) was used for reverse transcription, and a SYBR® Premix Ex Taq™ kit was used for quantitative PCR; the primer sequences are shown in Table 2. The Opticon Monitor 3 software (Bio-Rad Laboratories, Inc., CA, U.S.A.) was used to analyze the results of the PCR. The lowest point of the parallel rising logarithmic amplification curve was selected manually as the threshold value, and the cycle threshold (Ct) for each reaction tube was obtained. Data were analyzed using the 2^(−ΔΔCt) method, referring to the odds ratio (OR) of the target gene expression between the experimental group and the control group. The formula was as follows: ΔΔCt = (Ct target gene − Ct reference gene) experimental group − (Ct target gene − Ct reference gene) control group. The experiment in each group was repeated three times. The comparison amongst the groups was analyzed by t test.

Western blotting

After 24 h of transfection, cells were scraped on ice, centrifuged at 3000 rpm at 4°C, lysed with RIPA buffer (1:5) and placed in a refrigerator at 4°C for 1 h. Following centrifugation at 10000×g for 30 min in a refrigerated centrifuge, the supernatant was transferred to a new EP tube and a BCA kit (Univ-bio, Shanghai, China) was used to determine the protein concentration. After SDS/PAGE at 70 V for 120 min, the total protein (50 μg) was transferred to a PVDF membrane. After blocking with 5% skimmed milk at room temperature for 1 h and incubation with the appropriate primary and secondary antibodies, the images were scanned after development using ECL reagent. The gray values were analyzed with image analysis software. The relative expression of each protein was calculated as the gray value of the target protein/the gray value of the reference protein. The experiment in each group was repeated three times. The comparison amongst the groups was statistically analyzed using a t test.

Dual luciferase reporter gene assay

The biological prediction website microRNA.org was used to analyze target genes of miR-379 and to verify whether EIF4G2 was a direct target gene of miR-379. After cloning and amplifying the full-length EIF4G2 3′-UTR, the PCR products were cloned into the multiple cloning site of the pmirGLO luciferase vector (Promega Corp., Madison, WI, U.S.A.), which was named pEIF4G2-Wt. Site-specific mutagenesis was performed to alter the binding site of miR-379 predicted by bioinformatics, generating the pEIF4G2-Mut vector. The number of cells and transfection efficiency were normalized using pRL-TK (TaKaRa Biotechnology Ltd., Dalian, China), which expresses Renilla luciferase, as an internal reference. miR-379 mimics and miR-379 mimic NC were respectively cotransfected with the luciferase reporter vectors into U2OS and MG-63 cells, followed by dual luciferase activity detection according to the instructions given by Promega. The experiment in each group was repeated three times. The comparison among the groups was analyzed by t test.

MTT assay

Cell viability curves were drawn using the MTT assay to measure the proliferation of the transfected U2OS and MG-63 cells. After 48 h of transfection, cells in each group were counted and then inoculated into four 96-well plates at a density of 2 × 10² cells per 200 μl, with eight replicate wells. Four time points were chosen (0, 24, 48, and 72 h), and measurements were performed at each time point. The MTT solution (20 μl) was added to each well for 4 h of incubation at 37°C; the incubation was then terminated and the culture supernatant was discarded.
DMSO (150 μl) (Sigma, Englewood Cliffs, NJ, U.S.A.) was added to each well, and the plates were gently shaken for 10 min in an enzyme-linked immunosorbent detector. The absorbance (OD) values of each well were determined at a wavelength of 490 nm at each time point. Cell viability curves were generated with time as the x-axis and the OD value as the y-axis. The experiment in each group was repeated three times. The comparison amongst the groups was analyzed by t test.

Scratch test

The cells were digested and the cell concentration was adjusted to 5 × 10⁵/ml. The cell suspension (100 μl) was added to a 24-well culture plate for conventional incubation until the formation of a cell monolayer, and the scratch test was then performed. After washing once, the culture medium was replaced by RPMI 1640 containing BSA and 1% FBS, and the width of the scratch area was measured under a microscope. After incubation for 24 h, the cells were cultured for another 24 h in RPMI 1640 culture medium with 10% FBS, followed by measuring the relative distance that the cells migrated into the injured area. Cell migration distance = scratch distance at the beginning of the experiment − scratch distance at the end of the experiment. The experiment in each group was repeated three times. The comparison amongst the groups was analyzed by t test.

Transwell assay

Following transfection for 24 h, the cells were starved for 12 h and then digested in a serum-free medium, followed by washing twice with PBS and suspension in the serum-free medium Opti-MEM I (Invitrogen Inc., Carlsbad, CA, U.S.A.) with 10 g/l BSA. The cell density was adjusted to 3 × 10⁴ cells/ml. The experiment was performed in 24-well 8-μm Transwell plates (Corning-Costar, Corning, NY, U.S.A.) (three chambers per group; 100 μl cell suspension per chamber). The lower chamber, containing 600 μl 10% RPMI 1640 medium, was incubated in 5% CO2 at 37°C. Twenty-four hours later, the cells were fixed with 4% paraformaldehyde for 30 min, 0.2% Triton X-100 (Sigma Company, St. Louis, MO, U.S.A.) solution was added to the chambers for 15 min, and the cells were then stained with 0.05% Gentian Violet for 5 min. For the invasion assay, 50 μl Matrigel (Sigma Company, St. Louis, MO, U.S.A.) was added to the chambers before seeding, and 48 h later the chambers were fixed and stained using the method described above. The number of stained cells was counted under an inverted microscope. Five fields were randomly selected and the number of cells was represented as the mean. The experiment in each group was repeated three times. The comparisons amongst the groups were analyzed using a t test.

Flow cytometry

The cells were transfected and cultured for 24 h. After the culture solution was discarded, the cells were washed once with PBS solution. The cells were then digested with 0.25% trypsin solution; the digestion liquid was discarded once the cells had contracted and rounded up, as observed under a microscope, and culture solution containing serum was added to stop the digestion. By gentle pipetting, the cells were detached from the wall and mixed into a cell suspension, which was then centrifuged (1000 rpm) for 5 min, and the supernatant was discarded. The cells were washed with PBS solution twice, and then fixed for 30 min by adding pre-cooled 70% ethanol. The cells were centrifuged and collected, washed with phosphate buffer, stained with 1% propidium iodide (PI) solution containing RNase for 30 min, and washed with PBS solution twice to remove the PI.
The volume was adjusted to 1 ml using PBS solution. The cell cycle was detected using a BD-Aria FACS Calibur flow cytometer, with three samples in each group. Detection was repeated three times. The comparison amongst the groups was analyzed using a t test.

Statistical analysis

SPSS 21.0 software (SPSS Inc., Chicago, IL, U.S.A.) was used for statistical analysis. Counting data were represented as rates and/or percentages, and comparisons were performed using a chi-square test. Measurement data were represented as mean ± S.D., and comparisons between two groups were performed using a t test. Comparisons among multiple groups were performed using one-way ANOVA (variance homogeneity was tested before analysis). Comparisons between two means were verified using the least significant difference t (LSD-t) test. A two-tailed P-value less than 0.05 was considered statistically significant.

Results

miR-379 down-regulated EIF4G2 expression in U2OS and MG-63 cell lines

The results collected from qRT-PCR and Western blotting (Figure 1) indicated that, in the U2OS and MG-63 cell lines, no significant differences were found in the expressions of miR-379 and EIF4G2 amongst the miR-379 mimic NC, miR-379 inhibitor NC, control shRNA and blank groups (all P>0.05). In comparison with the miR-379 mimic NC group, the miR-379 mimics group showed considerably increased miR-379 expression and decreased EIF4G2 expression (both P<0.05), while the miR-379 inhibitors group had decreased miR-379 expression and increased EIF4G2 expression relative to the miR-379 inhibitor NC group (both P<0.05). In comparison with the control shRNA group, the EIF4G2 expression was lower in the EIF4G2 shRNA group (P<0.05), while there was no significant difference in miR-379 expression (P>0.05). In the miR-379 inhibitor + EIF4G2 shRNA group, the miR-379 expression was decreased (P<0.05), while there was no significant difference in EIF4G2 expression (P>0.05). The correlation analysis indicated that miR-379 was negatively correlated with EIF4G2.

miR-379 targetted EIF4G2

A biological prediction website (microRNA.org) showed that miR-379 could target EIF4G2 (Figure 2A). To confirm that EIF4G2 was a direct target gene of miR-379, the luciferase reporter vectors pEIF4G2-Wt and pEIF4G2-Mut were constructed using the EIF4G2 3′-UTR. The dual luciferase reporter assay showed that, in U2OS cells, the luciferase activity in the miR-379 mimics + pEIF4G2-wt group decreased by approximately 42% compared with that in the miR-379 mimic NC group (all P<0.05) (Figure 2B), while the luciferase activity in the pEIF4G2-mut cells showed no difference between the miR-379 mimics group and the miR-379 mimic NC group (both P>0.05). In MG-63 cells, the luciferase activity in the miR-379 mimics + pEIF4G2-wt group decreased by approximately 54% compared with that in the miR-379 mimic NC group (all P<0.05) (Figure 2B), while the luciferase activity in the pEIF4G2-mut cells showed no difference between the miR-379 mimics group and the miR-379 mimic NC group (both P>0.05). Therefore, EIF4G2 was a potential target gene of miR-379, and miR-379 could target and negatively regulate EIF4G2.

miR-379 decreased the proliferation of U2OS and MG-63 cells

Cell viability curves were drawn using the MTT method to determine the proliferation of the transfected U2OS and MG-63 cells. The OD values measured in the transfected cells are summarized in Figure 3A,B.
The results showed that the U2OS and MG-63 cells transfected with miR-379 mimics had a significantly slower growth rate than those transfected with miR-379 mimic NC (all P<0.05). The growth rates of the cells in the blank, miR-379 mimic NC, miR-379 inhibitor NC, control shRNA and miR-379 inhibitor + EIF4G2 shRNA groups were not significantly different (all P>0.05), while the growth rate of the cells was higher in the miR-379 inhibitors group than in the miR-379 inhibitor NC group (all P<0.05). Compared with the control shRNA group, the growth rate of the cells was lower in the EIF4G2 shRNA group (all P<0.05).

In the scratch and Transwell assays, there were no significant differences amongst the miR-379 mimic NC, miR-379 inhibitor NC, control shRNA, miR-379 inhibitor + EIF4G2 shRNA and blank groups (all P>0.05). In comparison with the miR-379 mimic NC group, the miR-379 mimics group showed significantly reduced migration and invasive ability (all P<0.05), while the miR-379 inhibitors group showed significantly increased migration and invasion compared with the miR-379 inhibitor NC group (all P<0.05). Compared with the control shRNA group, the migration and invasive ability of the cells were reduced in the EIF4G2 shRNA group (P<0.05).

Flow cytometry showed that, compared with the miR-379 mimic NC group, the percentage of OS cells at the G0/G1-phase in the miR-379 mimics group was increased, while the percentage at the S-stage was decreased (all P<0.05). Compared with the miR-379 inhibitor NC group, the percentage of OS cells at the G0/G1-phase in the miR-379 inhibitors group showed an obvious decrease, while the percentage of OS cells at the S-stage showed an evident increase (all P<0.05). Compared with the control shRNA group, the percentage of OS cells at the G0/G1-phase increased, while the percentage of OS cells at the S-stage decreased, in the EIF4G2 shRNA group (all P<0.05).

Discussion

OS is the most common primary malignancy of bone [15]. Several studies have reported the significance of miRs in relation to OS, including miR-101, miR-26b and miR-30a [16][17][18]. In the present study, it was established that miR-379 exhibited a low level of expression in the U2OS and MG-63 cell lines; in addition, inhibition of the proliferation, migration and invasion of OS cells was observed when EIF4G2 was targetted. Observations made during the present study indicated that miR-379 was significantly down-regulated in OS cells and could inhibit the proliferation, migration and invasion of OS cells. Previous reports have shown that miRs regulate epithelial-to-mesenchymal transition (EMT) through their effects on transcription factors and signaling pathways [11,19,20]. EMT is a process involving the transformation of epithelial cells into mesenchymal cells, featuring a loss of cell-cell adhesion and the acquisition of increased migratory and invasive capabilities [21]. EMT occurs in most tumorigenic cells, leading to increased invasion and metastasis in tumors [22,23]. Cancer cells usually undergo EMT before metastasis to distant organs and activate embryonic programs and pathways, which partially involve maintaining stem cell-like characteristics [11]. Studies have noted the significant role of miRs in metastasis and cancer stem cell formation [9,24]. The expression of miR-409-3p, which belongs to the same cluster as miR-379, is also down-regulated in OS and is associated with OS metastasis, suppressing OS cell migration and invasion [14]. miR-379 can also regulate cyclin B1 expression and is decreased in breast cancer [25]. It has also been reported that miR-379-5p could inhibit migration and invasion in hepatocellular carcinoma cells by targetting FAK/AKT signaling pathways [10].
The most significant conclusion drawn from the present study was that miR-379 could potentially act as a suppressor of OS by directly targetting EIF4G2. Intriguingly, as our study revealed, the mRNA expression of EIF4G2 was significantly down-regulated in the miR-379 mimics group and up-regulated in the miR-379 inhibitors group, highlighting a negative correlation with the expression of miR-379. The dual luciferase reporter gene assay further confirmed that EIF4G2 was a target gene of miR-379. In addition, miR-379 was found to significantly inhibit the proliferation, migration and invasion of OS cells. Translation of most mRNAs is regulated at the level of initiation, a process requiring the protein complex known as eukaryotic initiation factor 4F (EIF4F), which includes the cap-binding protein EIF4E, the scaffolding protein eukaryotic translation initiation factor 4γ (EIF4G) and the ATP-dependent RNA helicase EIF4A [26,27]. EIF4G is expressed in mammalian cells in two forms, namely EIF4G1 and EIF4G2 [28,29]. Accumulating evidence has shown that a disrupted translational machinery strongly contributes to cancer development and progression [27,30]. Translational deregulation occurs through modification of the expression of proteins involved in translational initiation and/or changes in miR expression [31][32][33]. Recently, several studies have focussed on exploring the mechanisms underlying translational regulation via miRs [34,35]. Mazan-Mamczarz et al. [27] demonstrated that the down-regulation of EIF4G2 by siRNA decreased translation and cell proliferation and induced cellular senescence. Consequently, we hypothesized that the effect of miR-379 on OS progression may be due to its targetting of EIF4G2.

One of the major limitations of the present study is the use of transient transfection of miR-379 instead of stable expression in OS cell lines. Furthermore, knockdown of miR-379 in non-cancerous osteoblastic cell lines would further substantiate its tumor-suppressing role in OS. To conclude, we established that the expression of miR-379 was down-regulated in OS cell lines, and that the overexpression of miR-379 suppressed OS cell proliferation, migration, invasion and tumour growth. The function of miR-379 was mediated by the down-regulation of EIF4G2. The data collected and evaluated suggest that miR-379 down-regulation may play crucial roles in the development of OS. Thus, miR-379 may serve as an effective therapeutic focus for inhibiting OS progression. Further studies are needed to investigate the precise roles that various miRs and their target genes play in the development and progression of OS.
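As a brief computational note on the expression analyses: the 2^(−ΔΔCt) relative-expression calculation described in Materials and methods reduces to a few lines of code. The sketch below uses purely illustrative Ct values, not data from this study:

```python
def delta_delta_ct(ct_target_exp, ct_ref_exp, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^(-DDCt) method:
    DDCt = (Ct_target - Ct_ref)_experimental - (Ct_target - Ct_ref)_control."""
    ddct = (ct_target_exp - ct_ref_exp) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Illustrative Ct values only (hypothetical target and reference genes)
fold_change = delta_delta_ct(24.1, 18.0, 22.0, 18.1)
print(f"relative expression vs control: {fold_change:.2f}")
```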
Stress Analysis and Research of a Cyclone in CAGG Units

The cyclone is one of the key pieces of equipment in high-pressure (2.0~3.0 MPa) Coal Ash-Agglomerated Gasification (CAGG) units. Based on the stress classification method and the design-by-analysis rule, using numerical simulation and ANSYS software, calculation and analysis have been carried out on a cyclone with a rectangular vortex inlet and a flat head. Comparative studies were made for designs with different quantities of stiffened plates. It can be concluded that the application of stiffened plates is a good option to improve the top head stress distribution.

INTRODUCTION

Coal gasification is a technology for producing gas fuels and gas feedstocks (CO + H2) from solid coal. CAGG is a newly developed gasification process in which pulverized coal is gasified in an ash-agglomerated fluidized bed gasifier.

Nowadays, the operating pressure in industrial applications of CAGG units is about 0.2~0.6 MPa. To meet the higher capacity requirements of future commercial units, it is essential that the operating pressure be raised to 2.0~3.0 MPa. For a gasifier of the same size, the capacity can rise from 200~300 tons/day to 1000~1500 tons/day after raising the operating pressure. The cyclone is one of the key pieces of equipment in the higher-pressure CAGG units. The traditional typical cyclone has a rectangular vortex inlet and a flat head. From the viewpoint of pressure vessel design, these structures are not favorable for stress distribution. Normally, the flat head should be used only when the vessel diameter is small (less than 500 mm) and the operating pressure is low (less than 1.0 MPa). In future higher-pressure CAGG units, the cyclone diameter will be as large as 2000 mm, the operating pressure will be up to 3.0 MPa and the operating temperature will be 1100 °C. Evidently, the engineering design of a high-pressure, high-temperature cyclone is inevitably a challenging task.

To deal with the issues of the traditional cyclone, Sun et al. (2006) put forward a new-type cyclone with a vaulted top, an eccentric vortex finder and a straight cut-in round inlet. The stress distribution of the new-type cyclone is improved to some extent. On the other hand, installation of the refractory lining becomes more difficult, and a non-uniform lining thickness causes an uneven temperature distribution. In fact, it is difficult for the new-type cyclone to retain the same separation efficiency as the traditional cyclone.

While retaining the traditional structure, the cyclone dimensions are optimized here according to the media properties and the operating conditions in CAGG units; it is essential to ensure excellent separation performance. Secondly, the cyclone stress distribution is optimized by using stiffened plates. The newly developed cyclone with stiffened plates will meet the high-pressure requirements. Because the rectangular vortex inlet and flat head are retained, the simple structure makes it easy to manufacture the metal shell and install the refractory lining.
Cyclones are thin-walled pressure vessels whose structure resembles the conjunction of a thin flat cover and a thin cylindrical shell. Moreover, the rectangular vortex inlet makes the top head structure more complex and the stress distribution worse. For a single thin-walled element, membrane shell theory can be used for stress analysis, but large shear stress and a large bending moment caused by interactive constraints appear at the conjunction of the flat cover and the cylindrical shell, and the boundary stress in particular is very large, so membrane shell theory is inapplicable in such a condition. The additional bending moment and additional shear stress can be solved by using plate-shell elastic mechanics theory and the related simultaneous equations (Wang, 2011). Although they are widely used in industrial applications, many design-by-rule methods are not accurate enough and are not suitable for the flat head (ASME, 2010a; Dennis, 2004; Farr and Jawad, 2010; Warren and Richard, 2011). Furthermore, the flat head of a cyclone is not a circular flat plate and the vessel is not a regular cylindrical shell, so it is difficult to obtain solutions from elastic mechanics theory alone. With the aid of numerical simulation technology and ANSYS software, the stress classification method and the design-by-analysis rule (ASME, 2010b) are employed in a detailed stress analysis. The dimensions are optimized and reasonable stiffened plates are considered in the new design.

FINITE ELEMENT MODEL

Geometrical model: The geometrical model of a typical cyclone, which has a nominal diameter of 2000 mm and a length of approximately 10000 mm, is established as shown in Fig. 1. Table 1 lists the dimensions of the main components used in the numerical simulations.

Element type: In this study, the cyclone is modeled with Shell 63 elements. Shell 63 is an elastic shell element.

Restrictions and loads: Full restraints are applied on the upper end and free restraints on the lower end. An internal pressure of 3.0 MPa is applied.

Influence of stiffened plates on the top cover: The Von Mises equivalent stress distributions with and without stiffened plates are shown in Fig. 2 and 3. The calculation results show that very high stress exists at the internal and external edges because of stress concentration, and the stress is greatly reduced after applying stiffened plates. The external and internal edge stress distributions of the upper cover with and without stiffened plates are shown in Fig. 4 and 5. According to the numerical simulation, the stress value at the inlet top external edge is reduced by 18.5% after applying stiffened plates, while the stress value at the circular middle external edge drops by 70%. The Von Mises equivalent-stress formula underlying these values is sketched below.
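For reference, the Von Mises equivalent stress that the simulation reports can be reproduced from the six Cauchy stress components with the standard formula. The following Python snippet is a minimal illustrative sketch; the component values in the example are hypothetical, not taken from the model.

    import math

    def von_mises(sx, sy, sz, txy, tyz, tzx):
        """Von Mises equivalent stress from the six Cauchy stress components (MPa)."""
        return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                         + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

    # Hypothetical biaxial stress state with shear near the flat-head edge.
    print(f"{von_mises(350.0, 120.0, 0.0, 40.0, 0.0, 0.0):.1f} MPa")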
In accordance with the simulation, thicker stiffened plates decrease the stress value. The maximum stress changes from 374.7 MPa to 351.8 MPa as the stiffened plate thickness changes from 18 mm to 52 mm. This change is not as large as that caused by the plate quantity, so it is unnecessary to make the plate thickness larger than 42 mm in view of the manufacturing cost.

CONCLUSION

• The traditional rectangular vortex inlet and flat head are retained in the newly developed CAGG cyclones. According to the numerical simulation and analysis, the application of stiffened plates improves the top head stress distribution. In the stiffened plate design, it is essential to optimize the plate quantity and plate thickness.
• Adjusting the stiffened plate quantity is more effective and more economical than adjusting the stiffened plate thickness.
• The recommended quantity of stiffened plates is 8~16. The recommended thickness of the stiffened plates is 30 mm.
p53 directly downregulates the expression of CDC20 to exert anti-tumor activity in mantle cell lymphoma

Background Cell cycle dysregulation characterized by cyclin D1 overexpression is common in mantle cell lymphoma (MCL), while mitotic disorder has been less studied. Cell division cycle 20 homologue (CDC20), an essential mitotic regulator, is highly expressed in various tumors. Another common abnormality in MCL is p53 inactivation. Little is known about the role of CDC20 in MCL tumorigenesis and the regulatory relationship between p53 and CDC20 in MCL.

Methods CDC20 expression was detected in MCL patients and MCL cell lines harboring mutant p53 (Jeko and Mino cells) and wild-type p53 (Z138 and JVM2 cells). Z138 and JVM2 cells were treated with the CDC20 inhibitor apcin, the p53 agonist nutlin-3a, or their combination, and then cell proliferation, apoptosis, cell cycle, migration and invasion were determined by CCK-8, flow cytometry and Transwell assays. The regulatory mechanism between p53 and CDC20 was revealed by dual-luciferase reporter gene assay and CUT&Tag technology. The anti-tumor effect, safety and tolerability of nutlin-3a and apcin were investigated in vivo in the Z138-driven xenograft tumor model.

Results CDC20 was overexpressed in MCL patients and cell lines compared with their respective controls. The typical immunohistochemical marker of MCL, cyclin D1, was positively correlated with CDC20 expression. High CDC20 expression indicated unfavorable clinicopathological features and poor prognosis in MCL patients. In Z138 and JVM2 cells, either apcin or nutlin-3a treatment inhibited cell proliferation, migration and invasion, and induced cell apoptosis and cell cycle arrest. GEO analysis, RT-qPCR and WB results showed that p53 expression was negatively correlated with CDC20 expression in MCL patients and in Z138 and JVM2 cells, while this relationship was not observed in p53-mutant cells. Dual-luciferase reporter gene assay and CUT&Tag assay revealed mechanistically that CDC20 is transcriptionally repressed by p53 through direct binding of p53 to the CDC20 promoter from −492 to +101 bp. Moreover, combined treatment with nutlin-3a and apcin showed a better anti-tumor effect than either single treatment in Z138 and JVM2 cells. Administration of nutlin-3a/apcin alone or in combination confirmed their efficacy and safety in tumor-bearing mice.

Conclusions Our study validates the essential role of p53 and CDC20 in MCL tumorigenesis, and provides a new insight for MCL therapeutics through dual-targeting of p53 and CDC20.

Introduction

Mantle cell lymphoma (MCL) is a rare, aggressive and heterogeneous B-cell non-Hodgkin's lymphoma, with an incidence accounting for about 6% of all non-Hodgkin's lymphomas and a median age of 68 years at initial diagnosis [1,2]. The clinical presentations of MCL are diverse and there are no curative methods to date, as most MCL patients relapse shortly after initial standard chemotherapy [3]. Although targeted drugs such as Bruton's tyrosine kinase (BTK) inhibitors [4,5], phosphoinositide 3-kinase (PI3K) inhibitors [6,7], B cell lymphoma-2 (BCL2) inhibitors [8-10] and poly ADP-ribose polymerase (PARP) inhibitors [11] have been developed for the treatment of relapsed/refractory MCL, the prognosis of MCL patients still remains poor. Further studies on MCL pathogenesis are conducive to discovering novel small-molecule agents for improving the clinical outcomes of MCL patients.
Cell cycle dysregulation is common in MCL, manifested by the accumulation of aberrant cyclin D1 caused by the chromosomal translocation t(11;14)(q13;q32), thus accelerating the transition from G1 phase to S phase and promoting malignant B cell proliferation [12,13]. However, limited studies have focused on mitotic disorder in MCL. Effective anti-tumor strategies targeting mitosis have been reported, which led to mitotic cell arrest and induced mitotic cell death [14]. Cell division cycle 20 homologue (CDC20), an essential mitotic regulator, binds to the anaphase-promoting complex/cyclosome (APC/C) and activates APC/C to ensure chromosome segregation and facilitate the metaphase-anaphase transition, followed by mitotic exit [15]. Plenty of studies have shown that CDC20 functions as an oncogenic factor and is highly expressed in a variety of solid tumors and hematological malignancies, including breast cancer [16], lung cancer [17], gastric cancer [18], hepatocellular carcinoma [19], colorectal cancer [20], pancreatic cancer [21], bladder cancer [22], oral squamous cell carcinoma [23], ovarian cancer [24], glioblastoma [25], multiple myeloma [26] and diffuse large B-cell lymphoma [27]. CDC20 overexpression is also closely related to poor pathological classification and shorter survival in these tumor patients, suggesting that it might serve as an important prognostic factor and a potential anti-tumor target. Currently, few studies have explored the role of CDC20 in the development and progression of MCL.

In addition to cell cycle dysregulation, another common abnormality in MCL is p53 inactivation caused by p53 deletion or mutation. As a tumor suppressor, p53 is activated under endogenous and exogenous cellular stress, and can prevent normal cells from becoming cancerous by triggering cell cycle arrest, apoptosis, cellular senescence or autophagy [28,29]. Approximately 30% of MCL patients suffer from p53 inactivation, and these patients usually have adverse clinical outcomes [12]. Since p53 inactivation is mediated by MDM2 overexpression in some MCL patients, one way to reactivate p53 is to inhibit MDM2, a negative regulator of p53 [30]. Besides, target genes transcriptionally repressed by p53 are frequently overexpressed in tumors [31], and Zhang et al. found that an increase in p53 was accompanied by a decrease in CDC20 after drug treatment in triple-negative breast cancer [32]. We therefore hypothesized that a negative regulatory relationship between p53 and CDC20 also exists in MCL.

In this study, we aimed to explore the role of p53 and CDC20 in MCL, demonstrate the specific mechanism by which p53 regulates CDC20, and verify the efficacy and safety of anti-MCL therapy targeting p53 and CDC20 in vitro and in vivo. The relationship between CDC20 expression level and the clinicopathological features and prognosis of MCL patients was investigated. The CDC20 inhibitor apcin and the p53 agonist nutlin-3a were used to confirm the effect of inhibiting CDC20 or activating p53 on the proliferation, apoptosis, cell cycle, migration and invasion abilities of MCL cells. Dual-luciferase reporter gene assay and CUT&Tag technology were performed to clarify the regulatory mechanism between p53 and CDC20. Most importantly, we attempted to prove the combined anti-tumor effect of nutlin-3a and apcin in MCL cells and an MCL xenograft model, so as to provide initial evidence for the clinical feasibility of dual-targeting p53 and CDC20 in MCL therapeutics.
Acquisition of clinical samples

Peripheral blood from 24 MCL patients and 7 healthy controls as well as bone marrow from 17 MCL patients and 10 healthy donors were collected at Peking University Third Hospital from December 2020 to September 2021. Peripheral blood mononuclear cells (PBMCs) and bone marrow mononuclear cells (BMNCs) were extracted from peripheral blood and bone marrow using Ficoll Plus 1.077 solution (Solarbio, Beijing, China), respectively. Sections of 51 MCL patients with tumor tissues were obtained from the Department of Pathology of Peking University Third Hospital between January 2010 and June 2021, and another 12 sections with lymph node reactive hyperplasia (LRH) served as controls. The clinical data of the above patients were recorded for analysis. This study was approved by the Ethics Committee of Peking University Third Hospital (S2021251), and all participants signed informed consent.

Cell viability assay

Cell viability was determined by Cell Counting Kit-8 assay (CCK-8, APExBIO, USA). Cells were inoculated in 96-well plates at a density of 10⁴ cells per well in 100 μl complete medium (RPMI-1640 medium + 10% FBS + 1% penicillin-streptomycin). Cells were exposed to designated concentrations of the CDC20 inhibitor apcin (Selleck, USA) or the p53 agonist nutlin-3a (Selleck, USA) alone, or co-treated with apcin and nutlin-3a, and incubated for 24 h, 48 h and 72 h. Nutlin-3a inhibits the interaction between MDM2 and p53 and thereby activates p53 [33], so we used nutlin-3a as the p53 agonist in this study. At each time point, 10 μl of CCK-8 solution was added to each well, followed by incubation for another 4 h. After that, the optical density (OD) value per well was measured at 450 nm using a microplate reader (Tecan Spark, Switzerland). Cell viability at each time point was defined as the ratio of the OD value in the agent-treatment group to that in the corresponding untreated group; correspondingly, inhibitory rate = 1 − cell viability. Whether the combination of apcin and nutlin-3a had a synergistic effect was determined by combination index (CI) values calculated with CompuSyn software (ComboSyn, Inc., USA) based on the inhibitory rates of the two agents. The viability arithmetic is sketched below.
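As a minimal sketch of this arithmetic (Python; the OD450 readings below are hypothetical, and blank-well correction is omitted):

    def viability(od_treated, od_untreated):
        """Cell viability: OD450 of the treated wells over OD450 of the untreated control."""
        return od_treated / od_untreated

    def inhibitory_rate(od_treated, od_untreated):
        """Inhibitory rate = 1 - cell viability, as defined in the methods."""
        return 1.0 - viability(od_treated, od_untreated)

    # Hypothetical OD450 readings at 48 h.
    print(f"viability: {viability(0.62, 1.15):.2f}")       # ~0.54
    print(f"inhibition: {inhibitory_rate(0.62, 1.15):.2f}")  # ~0.46

The CI values themselves were computed by CompuSyn from these inhibitory rates.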
EdU cell proliferation assay

Cell proliferation was assessed using the BeyoClick™ EdU Cell Proliferation Kit (Beyotime, Shanghai, China). Cells were inoculated in 24-well plates at a density of 5 × 10⁵ cells per well in 1 ml complete medium, and then were treated with different concentrations of apcin or nutlin-3a alone, or co-treated with apcin and nutlin-3a. After 48 h of culture, 10 μM EdU reagent was added to each well, and the plates were incubated for another 2 h. The stained cells were detected by flow cytometry (Beckman, USA). CytExpert software (Beckman, USA) was used to analyze the EdU incorporation rate, which reflects cell proliferation ability.

Cell apoptosis analysis

Cell apoptosis was examined using the Annexin V-FITC/PI Apoptosis Detection Kit (Vazyme, Nanjing, China). Cells were inoculated in 24-well plates at a density of 4 × 10⁵ cells per well in 1 ml complete medium. These cells were treated with different concentrations of apcin or nutlin-3a alone, or co-treated with apcin and nutlin-3a. Cells were collected by centrifugation after 48 h, and incubated with 5 μl Annexin V-FITC and 5 μl PI at room temperature for 10 min in the dark. The stained cells were detected by flow cytometry (BD, USA) within 1 h. FlowJo software (Version 10.6.2, FlowJo LLC, BD, USA) was used to analyze the total apoptosis rate, defined as the sum of the early apoptosis rate and the late apoptosis rate.

Mitochondrial membrane potential detection

The Mitochondrial Membrane Potential Assay Kit with JC-1 (Solarbio, Beijing, China) was used to detect changes in mitochondrial membrane potential (MMP) as well as to evaluate early cell apoptosis. When MMP is high, the JC-1 probe forms aggregates and emits red fluorescence; when MMP is low, the JC-1 probe remains monomeric and emits green fluorescence. The transition from red to green fluorescence represents a decrease in MMP, which is a hallmark event of early apoptosis. Cells were inoculated in 24-well plates at a density of 4 × 10⁵ cells per well in 1 ml complete medium. These cells were treated with different concentrations of apcin or nutlin-3a alone, or co-treated with apcin and nutlin-3a. Cells were harvested after 48 h, resuspended in 0.5 ml complete medium mixed with 0.5 ml JC-1 staining solution, and incubated at 37 ℃ for 20 min. The stained cells were detected by flow cytometry (Beckman, USA). The results were displayed as the ratio of mean red fluorescence intensity to mean green fluorescence intensity, analyzed by CytExpert software (Beckman, USA).

Cell cycle analysis

Cell cycle distribution was evaluated using the Cell Cycle Detection Kit (KeyGEN, Nanjing, China). Cells were inoculated in 24-well plates at a density of 4 × 10⁵ cells per well in 1 ml complete medium. Cells were treated with different concentrations of apcin or nutlin-3a alone, or co-treated with apcin and nutlin-3a. Cells were collected after 48 h.

CUT&Tag analysis

Cleavage Under Targets and Tagmentation (CUT&Tag) is a new technique for investigating the interaction between proteins and DNA fragments. In this study, the CUT&Tag technique was used to explore whether CDC20 is a downstream target gene regulated by p53, and to locate the binding site of p53 on the CDC20 promoter in Z138 cells treated with or without 5 μM nutlin-3a. Cells were harvested and the cell nuclei were extracted and purified, followed by incubation with p53 primary antibody (1:200, #2524, CST, USA) overnight at 4 ℃. On the next day, the secondary antibody was incubated at room temperature for 30 min, followed by incubation with hyperactive protein A/G-Tn5 transposase for 30 min to obtain fragmented DNA. The DNA library was purified and amplified for sequencing on the Illumina NovaSeq 6000 platform. For CUT&Tag data analysis, FastQC software was first used to evaluate the quality of the raw data and remove poor-quality data. Low-quality reads and adapters were removed by Trim Galore using the parameters '-q 20 --length 20 --stringency 3', so that reads with quality above 20 and longer than 20 nt were kept for adapter removal and subsequent analysis. BWA or Bowtie software was applied to map the quality-filtered data to the reference genome. MACS was used to perform peak calling for each sample, and then genomic site annotation, motif analysis, and GO and KEGG enrichment analysis of target genes were implemented on these peak regions. The IGV tool was employed to visualize the read count data after converting raw bam files to bigwig files. A minimal scripted sketch of this preprocessing appears below.
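The trimming, alignment and peak-calling chain described above can be sketched as a small driver script. This is illustrative only: the file names, index path and single-end invocation are assumptions, and the SAM-to-BAM conversion and sorting between alignment and peak calling are omitted.

    import subprocess

    reads = "p53_cuttag_R1.fastq.gz"  # hypothetical input file

    # Quality/adapter trimming with Trim Galore, using the parameters from the text.
    subprocess.run(["trim_galore", "-q", "20", "--length", "20",
                    "--stringency", "3", reads], check=True)

    # Alignment with Bowtie2 against a pre-built reference index (path assumed).
    subprocess.run(["bowtie2", "-x", "hg38_index",
                    "-U", "p53_cuttag_R1_trimmed.fq.gz",
                    "-S", "p53_cuttag.sam"], check=True)

    # Peak calling with MACS2 on the converted, sorted BAM (conversion omitted here).
    subprocess.run(["macs2", "callpeak", "-t", "p53_cuttag.bam",
                    "-n", "p53_cuttag"], check=True)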
Xenograft tumor model in Balb/c nude mice

Female Balb/c nude mice (4-5 weeks old) were purchased from the Department of Laboratory Animal Science, Peking University Health Science Center. They were raised in a specific pathogen-free environment with controlled temperature and humidity. Z138 cells were suspended in a mixture of PBS and Matrigel (PBS:Matrigel = 1:1), and each mouse was subcutaneously injected with 200 μl of cell suspension containing 10⁷ cells into the right flank region to establish the MCL model. Tumor size and mouse body weight were measured every 2 days, and tumor volume was calculated with the formula V = length × width²/2 [27]. When the tumor volume reached 100-120 mm³ (10 days after tumor implantation), the mice were randomly divided into the following 4 groups (n = 6 in each group) and treatment started: control group (normal saline), nutlin-3a group (40 mg/kg), apcin group (20 mg/kg), and nutlin-3a (40 mg/kg) plus apcin (20 mg/kg) group. Normal saline, nutlin-3a and apcin were administered intraperitoneally every other day for 2 weeks (a total of 7 doses). On the first day after the last treatment, 20 μl of eyeball blood was collected from each mouse for routine blood testing. On the second day after the last treatment, one eyeball of each mouse was removed to obtain whole blood, and serum was obtained by centrifugation for blood biochemistry testing. Meanwhile, mice were sacrificed and their hearts, livers, spleens, lungs, kidneys and tumors were harvested for immunohistochemical (IHC) and hematoxylin-eosin (HE) staining. This animal experiment was approved by the Laboratory Animal Welfare Ethics Committee of Peking University Health Science Center (LA2016029).

Immunohistochemical (IHC) and hematoxylin-eosin (HE) staining

IHC staining was performed on paraffin sections from MCL patients, LRH patients, and mouse tumors. Sections were incubated with primary antibodies overnight at 4 ℃. CDC20 (1:500, 10252-1-AP, Proteintech, China) and cyclin D1 (1:1000, 26939-1-AP, Proteintech, China) primary antibodies were used to stain tissues from MCL and LRH patients, while p53 (1:3200, 60283-2-Ig, Proteintech, China), CDC20 (1:200, 10252-1-AP, Proteintech, China), cleaved PARP (1:200, #94885, CST, USA), and Ki-67 (1:400, #12202, CST, USA) primary antibodies were used to stain mouse tumor tissues. All sections were scanned with a digital scanner (Aperio Versa 8, Leica, Germany) and images were presented and captured with Aperio ImageScope software (Leica, Germany). The targeted protein expression was analyzed quantitatively by mean optical density (MOD) [36]. Six images of each section at 200× magnification were randomly captured, and the integrated optical density (IOD) and positive staining area of each image were measured with Image-Pro Plus software (Version 6.0, Media Cybernetics, USA). The MOD of each image was obtained by dividing the IOD by the positive staining area, and the MOD of each section was the average MOD of the six images (this arithmetic, together with the tumor volume formula above, is sketched below). The percentage of cyclin D1 positive cells in each MCL patient was calculated with the HALO image analysis platform (Version 3.3.2541.345, Indica Labs, USA), defined as the proportion of positively stained cells to total cells on a scanned image. In vivo safety of nutlin-3a and apcin was evaluated by HE staining of mouse heart, liver and kidney tissue sections. Sections were routinely deparaffinized and hydrated. Then, they were soaked in Harris hematoxylin (BASO, Zhuhai, China) solution for 5 min to stain nuclei and soaked in eosin (BASO, Zhuhai, China) solution for 40 s to stain cytoplasm and extracellular matrix.
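The two quantitative definitions used above, tumor volume V = length × width²/2 and section MOD as the mean of per-image IOD divided by positive-stained area, amount to simple arithmetic. A minimal sketch (all measurements hypothetical):

    def tumor_volume(length_mm, width_mm):
        """Tumor volume V = length * width**2 / 2, in mm^3."""
        return length_mm * width_mm ** 2 / 2.0

    def section_mod(iod_values, positive_areas):
        """Section MOD: mean over images of IOD / positive staining area."""
        per_image = [iod / area for iod, area in zip(iod_values, positive_areas)]
        return sum(per_image) / len(per_image)

    print(f"volume: {tumor_volume(9.0, 6.5):.1f} mm^3")  # hypothetical caliper values
    mod = section_mod([4200, 3900, 4500, 4100, 4300, 4000],        # hypothetical IODs
                      [21000, 19500, 22000, 20000, 21500, 19800])  # hypothetical areas
    print(f"MOD: {mod:.3f}")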
Statistical analysis

SPSS software (Version 25.0, IBM SPSS Inc., Chicago, IL, USA) was used for statistical analysis, and GraphPad Prism software (Version 9.0.2, San Diego, CA, USA) was used to draw statistical graphs. Measurement data with a normal distribution were presented as mean ± standard deviation (SD) and analyzed by independent t-test (2 groups) or one-way ANOVA (≥ 3 groups). Measurement data with a skewed distribution were shown as median, maximum and minimum, and compared by Mann-Whitney U test. Count data were expressed as number (n) and percentage (%) and evaluated by Chi-square test. For correlation analysis, R language (Version 4.1.2) was used for statistical analysis and drawing statistical charts. Statistical significance was defined as P < 0.05. A minimal sketch of this test selection is shown below.
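As a hedged illustration of the test selection described above (using SciPy rather than SPSS; all measurements are hypothetical):

    from scipy import stats

    g1 = [1.2, 1.5, 1.1, 1.4, 1.3]
    g2 = [2.1, 2.4, 1.9, 2.2, 2.6]
    g3 = [3.0, 2.8, 3.3, 2.9, 3.1]

    # Normally distributed data: independent t-test (2 groups) or one-way ANOVA (>= 3 groups).
    print(stats.ttest_ind(g1, g2))
    print(stats.f_oneway(g1, g2, g3))

    # Skewed data: Mann-Whitney U test.
    print(stats.mannwhitneyu(g1, g2))

    # Count data: chi-square test on a contingency table.
    chi2, p, dof, expected = stats.chi2_contingency([[13, 11], [5, 22]])
    print(f"chi-square p = {p:.3f}")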
CDC20 was upregulated in MCL, and was closely related to clinicopathological features and prognosis of MCL patients

We first verified the expression level of CDC20 in MCL patients and cell lines. CDC20 mRNA expression was detected by RT-qPCR in PBMCs and BMNCs of MCL patients and healthy controls, and CDC20 protein expression was assessed by IHC analysis of pathological sections from MCL and LRH patients. The results showed that CDC20 expression was significantly increased in PBMCs of MCL patients (n = 24) compared with healthy controls (n = 7) (Fig. 1A). There was no significant difference in CDC20 expression between bone marrow-uninvolved MCL patients (n = 6) and healthy donors (n = 10), whereas CDC20 was overexpressed in MCL patients with bone marrow involvement (n = 11) (Fig. 1B). Similarly, IHC staining of CDC20 quantified by MOD revealed that CDC20 protein expression in MCL patients (n = 51) was higher than in LRH patients (n = 12) (Fig. 1C). Moreover, we found that the percentage of cyclin D1 positive cells, a typical IHC marker of MCL, was positively correlated with CDC20 expression level (Fig. 1D). For MCL cell lines, CDC20 expression was significantly upregulated in all four cell lines at the mRNA (Fig. 1E) and protein (Fig. 1F) levels compared with healthy PBMCs, and CDC20 expression in the p53-mutant cells Jeko and Mino was higher than that in the p53-wild type cells Z138 and JVM2 (Fig. 1F).

The association between CDC20 mRNA expression level and clinicopathological parameters of MCL patients was further evaluated. In the 24 MCL patients with PBMCs extracted, subgroup analyses were performed on treatment response, MCL International Prognostic Index (MIPI) score and combined MCL International Prognostic Index (MIPI-c) score. According to treatment response, MCL patients were divided into the complete remission (CR)/partial remission (PR) group (n = 13) and the stable disease (SD)/progressive disease (PD) group (n = 11). The results suggested that CDC20 expression in the CR/PR group was significantly lower than that in the SD/PD group (Fig. 2A). According to MIPI score, MCL patients were divided into the low risk group (n = 10), the intermediate risk group (n = 8) and the high risk group (n = 6), and the results showed that the higher the risk level, the higher the expression of CDC20 (Fig. 2B). Similar results were also found in the MIPI-c score-based grouping, which classified MCL patients into the low risk group (n = 4), the low-intermediate risk group (n = 9), the high-intermediate risk group (n = 5), and the high risk group (n = 6): CDC20 expression in the low and low-intermediate risk patients was significantly lower than that in the high-intermediate and high risk patients (Fig. 2C).

Furthermore, the relationship between CDC20 protein expression level and clinicopathological features in the 51 MCL patients with CDC20 IHC staining was also analyzed. As shown in Table 2, CDC20 expression level was significantly correlated with Ki-67 expression (P = 0.014), serum LDH level (P = 0.007), tumor stage (P = 0.040), MIPI score (P = 0.008), number of involved lymph node areas (P = 0.036), bone marrow involvement (P = 0.015) and treatment response (P = 0.002), but was not significantly associated with age, gender, B symptoms, pathological type, ECOG score, white blood cell count or serum β2-MG level. Whether CDC20 expression could serve as a prognostic factor for MCL patients was also investigated. Overall survival (OS) was analyzed on the GSE93291 dataset (n = 123) from the GEO database, and the result implied that MCL patients with high CDC20 expression had shorter OS (Fig. 2D), suggesting the prognostic value of CDC20 in MCL patients.

Fig. 2 CDC20 mRNA expression level grouped by treatment response (A), MIPI score (B) and MIPI-c score (C). A Patients were divided into the CR/PR group and the PD/SD group according to treatment response, and CDC20 mRNA expression level between the two groups was compared. B Patients were divided into the low risk group, the intermediate risk group and the high risk group according to MIPI score, and CDC20 mRNA expression level among the three groups was compared. C Patients were divided into the low risk group, low-intermediate risk group, high-intermediate risk group, and the high risk group according to MIPI-c score, and CDC20 mRNA expression level among the four groups was compared. D The relationship between CDC20 expression level and overall survival of MCL patients was analyzed via the GSE93291 dataset (n = 123). *P < 0.05, **P < 0.01, ***P < 0.001; ns, P > 0.05.

CDC20 inhibition could impede cell proliferation, migration and invasion, and induce apoptosis and cell cycle arrest

Apcin, an inhibitor of APC/C-CDC20 [37], was used to demonstrate the effect of CDC20 on the cell phenotype of MCL cell lines. The CCK-8 assay was used to test cell viability. The results showed a significant decrease in the viability of both Z138 and JVM2 cells in a dose- and time-dependent manner, while apcin treatment had almost no effect on healthy PBMCs (Fig. 3A). Correspondingly, the EdU cell proliferation assay also proved that the higher the apcin concentration, the lower the EdU incorporation rate in Z138 and JVM2 cells after 48 h of apcin exposure (Fig. 3B). These results confirmed that CDC20 repression inhibits MCL cell growth. Apoptotic cells were analyzed by flow cytometry after Z138 and JVM2 cells were treated with apcin for 48 h. Compared with the control group, both apcin-treated Z138 and JVM2 cells underwent obvious apoptosis, and the percentage of apoptotic cells increased in a dose-dependent manner (Fig. 3C). Further, the JC-1 fluorescent probe was used to detect changes in mitochondrial membrane potential (MMP) in MCL cells. As shown in Fig. 3D, the fluorescence ratio in Z138 and JVM2 cells decreased with increasing apcin concentration, confirming that suppression of CDC20 led to early cell apoptosis. Apoptosis-related proteins were also examined by WB, and the results implied that cleaved caspase-3, cleaved caspase-9 and cleaved PARP all increased after Z138 and JVM2 cells were exposed to 50 μM and 100 μM apcin for 24 h and 48 h (Fig. 3E). A cell cycle assay was performed to observe whether CDC20 had an effect on the MCL cell cycle.
Both Z138 and JVM2 cells were found to be arrested at the G2/M phase after treatment with 50 μM and 100 μM apcin for 48 h (Z138 cells) or 72 h (JVM2 cells). In addition, the G0/G1 phase fraction was decreased in Z138 cells, while the S phase fraction was depressed in JVM2 cells (Fig. 3F). These findings suggested that downregulation of CDC20 mainly results in G2/M arrest in the MCL cell cycle. Cell migration and invasion abilities under CDC20 inhibition were further examined by Transwell assays. The number of migratory and invasive Z138 and JVM2 cells in the apcin-treatment group was significantly lower than that in the corresponding control group, in a concentration-dependent manner (Fig. 3G and H). Not surprisingly, the protein expression of MMP2 and MMP9, two factors regulating cell migration and invasion, decreased after exposure to apcin in Z138 and JVM2 cells (Fig. 3I). Therefore, low expression of CDC20 could effectively inhibit cell motility.

Activated p53 downregulated CDC20 expression, and the effect of p53 activation on MCL cell phenotype was similar to CDC20 inhibition

To clarify the relationship between p53 and CDC20 in MCL, we first analyzed the correlation between p53 and CDC20 via the GSE10793 (n = 66) and GSE93291 (n = 123) datasets of MCL patients and the 24 MCL patients with PBMCs extracted in our study. As shown in Fig. 4A, the expression level of p53 was negatively correlated with the expression level of CDC20 in the GSE10793 and GSE93291 datasets. There was a negative relationship between p53 and CDC20 in our patient cohort, but it did not reach statistical significance. Besides, RT-qPCR and WB results validated that the mRNA and protein expression levels of CDC20 were significantly reduced after p53 activation in p53-wild type Z138 and JVM2 cells treated with nutlin-3a for 24 h, while the expression level of CDC20 showed no significant change in p53-mutant Jeko and Mino cells after nutlin-3a exposure (Fig. 4B and C). Induction of p21 expression is a hallmark of p53 activation, since p21 is a well-defined downstream target of p53. In Z138 and JVM2 cells, p21 accumulated with nutlin-3a treatment, whereas this phenomenon was not observed in Jeko and Mino cells (Fig. 4B and C). Collectively, these data illustrated that CDC20 expression is downregulated by p53 in MCL, and that an activated status of p53 is necessary for CDC20 regulation.

We wondered whether p53 activation could achieve effects on MCL biological behaviors similar to those provoked by CDC20 suppression. The CCK-8 results showed that nutlin-3a inhibited cell growth in a dose- and time-dependent manner in Z138 and JVM2 cells, while its effect on healthy PBMCs was negligible (Fig. 5A). The EdU cell proliferation assay uncovered that 48 h of nutlin-3a treatment led to a significant reduction of EdU incorporation in Z138 and JVM2 cells (Fig. 5B), which was accompanied by a concentration-dependent induction of cell apoptosis (Fig. 5C). The JC-1 test also indicated that the fluorescence ratio decreased with increasing nutlin-3a concentration in Z138 and JVM2 cells (Fig. 5D). Moreover, increased protein expression of Noxa, Puma, cleaved caspase-3, cleaved caspase-9 and cleaved PARP was found after Z138 and JVM2 cells were treated with 5 μM nutlin-3a for 24 h and 48 h (Fig. 5E); Noxa and Puma are p53-regulated pro-apoptotic proteins. Thus, p53 activation in MCL cells could hinder cell proliferation and lead to apoptotic events. Cell cycle distribution was analyzed after Z138 and JVM2 cells were incubated with nutlin-3a for 48 h.
As shown in Fig. 5F, accumulation in the G0/G1 phase as well as a reduction in the S phase were observed in both Z138 and JVM2 cells in a dose-dependent manner. Additionally, G2/M arrest was found in Z138 cells after treatment with a higher dose of nutlin-3a (5 μM). These findings clarified that p53 activation brings about abnormal cell cycle progression, characterized by increased G0/G1 and G2/M phase fractions and a reduced S phase fraction. The impact of nutlin-3a treatment on cell motility was then evaluated. Expectedly, the abilities of cell migration and invasion were significantly attenuated (Fig. 5G and H), with decreased expression of MMP2 and MMP9 proteins (Fig. 5I), in Z138 and JVM2 cells treated with nutlin-3a compared with the control group, further proving that p53 activation has an inhibitory effect on MCL cell motility.

p53 negatively regulated CDC20 expression through directly binding to CDC20 promoter

We have demonstrated that p53 activation suppressed the expression of CDC20 at the mRNA and protein levels. Activating p53 or inhibiting CDC20 exerted the same effects on MCL cells, including cell proliferation inhibition, apoptosis induction, cell cycle arrest, and impaired migratory and invasive capabilities. Considering that p53 is a fundamental transcription factor, we further investigated the mechanism by which p53 transcriptionally regulates CDC20 in MCL. A dual-luciferase reporter gene assay was performed in 293T cells to assess the effect of p53 on CDC20 promoter activity. As displayed in Fig. 6A, after co-transfection with pcDNA3.1-TP53 and pGL4-CDC20, the relative luciferase activity was significantly reduced compared with that of single transfection with pGL4-CDC20, implying that p53 transcriptionally represses CDC20 promoter activity in 293T cells. A CUT&Tag assay was implemented to further elucidate whether CDC20 is directly regulated by p53 in MCL cells. Untreated and nutlin-3a-treated Z138 cells were harvested and sequentially incubated with p53 primary antibody, secondary antibody and protein A/G-Tn5 transposase for DNA library construction and high-throughput sequencing. The results validated that p53 could directly bind to the promoter region of CDC20, with the binding site located from 492 bp upstream to 101 bp downstream of the TSS. The peak signal of the CDC20 promoter was significantly lower in nutlin-3a-treated Z138 cells than in the untreated group (Fig. 6B). Besides, pre-treatment of Z138 cells with 15 μM PFT-α, an inhibitor of p53-dependent transcriptional activity, could significantly restore the CDC20 expression reduced by apcin treatment (Fig. 6C). Together, these data proved that p53 transcriptionally inhibits CDC20 expression by directly binding to the CDC20 promoter.

Combination of p53 activation and CDC20 inhibition had augmented anti-MCL activity

Considering the significant role of both p53 and CDC20 in MCL development and progression, the combined effect of nutlin-3a and apcin in Z138 and JVM2 cells was investigated. As determined by CCK-8 assay, cell viability was greatly reduced in the combination group compared with nutlin-3a or apcin treatment alone in Z138 and JVM2 cells, while it was not influenced by combined or single use in healthy PBMCs (Fig. 7A). The CI values analyzed by CompuSyn revealed that combinations of nutlin-3a and apcin at different concentrations had a synergistic effect on Z138 and JVM2 cells (Table 3); the CI arithmetic is sketched below.
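CompuSyn implements the Chou-Talalay median-effect method, in which CI = d1/Dx1 + d2/Dx2, where d1 and d2 are the doses used in combination and Dx1, Dx2 are the single-agent doses producing the same effect level; CI < 1 indicates synergy. The sketch below uses hypothetical doses, not values from Table 3.

    def combination_index(d1, d2, dx1, dx2):
        """Chou-Talalay CI: combination doses over the iso-effective single-agent doses.
        CI < 1 suggests synergy, CI = 1 additivity, CI > 1 antagonism."""
        return d1 / dx1 + d2 / dx2

    # Hypothetical: 1 uM nutlin-3a + 50 uM apcin matching the effect of
    # 2.5 uM nutlin-3a alone or 120 uM apcin alone.
    print(f"CI = {combination_index(1.0, 50.0, 2.5, 120.0):.2f}")  # 0.82 -> synergy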
EdU testing showed that combined use of 1 μM nutlin-3a and 50 μM apcin exerted stronger inhibition of tumor cell proliferation than single use (Fig. 7B). Apoptosis analysis suggested that the combination of 1 μM nutlin-3a and 50 μM apcin for 48 h caused marked cell apoptosis compared with treatment with nutlin-3a or apcin alone (Fig. 7C). The JC-1 assay showed that the combination group had a lower fluorescence ratio than the single-treatment groups (Fig. 7D). Further, compared with the single nutlin-3a or apcin groups, CDC20 protein expression was lowest and the protein expression of cleaved caspase-3, cleaved caspase-9 and cleaved PARP was highest in the combination group (Fig. 7E). In brief, the combination of nutlin-3a and apcin repressed cell proliferation and promoted apoptosis more markedly than either single reagent.

Furthermore, whether co-treatment with nutlin-3a and apcin had a profound effect on the cell cycle was evaluated. As shown in Fig. 7F, after co-treatment with nutlin-3a and apcin for 48 h, the proportions of the G0/G1 phase and S phase were significantly reduced, while the cell number in the G2/M phase was significantly increased in Z138 cells. For JVM2 cells, combined treatment for 72 h resulted in cell accumulation in the G0/G1 and G2/M phases, while the cell number in the S phase was significantly reduced. These results suggested that the combined effect of p53 activation and CDC20 inhibition on the cell cycle is manifested by a reduction of the S phase fraction and cell arrest in the G2/M phase.

Nutlin-3a/apcin alone or in combination could exert anti-tumor effect in vivo safely

In vitro experiments confirmed the potency of nutlin-3a and apcin in suppressing MCL tumorigenesis. Whether nutlin-3a and apcin could have an anti-tumor effect in vivo was then determined. A human xenograft MCL model was established by subcutaneously injecting Z138 cells into the right flank region of female Balb/c nude mice. The mice were randomly divided into 4 groups (n = 6 for each group) before administration, including the control group (normal saline, i.p.), the nutlin-3a group (40 mg/kg, i.p.), the apcin group (20 mg/kg, i.p.), and the nutlin-3a (40 mg/kg, i.p.) plus apcin (20 mg/kg, i.p.) group. Administration was by intraperitoneal injection (i.p.) every other day for 2 weeks (a total of 7 doses). All 24 mice survived until they were sacrificed. The whole animal experiment is summarized in Fig. 8. We first evaluated the treatment efficacy in the four groups.

Fig. 4 p53 activation could downregulate CDC20 expression. A GSE10793 (n = 66), GSE93291 (n = 123) and patients in our cohort (n = 24) showed that TP53 expression was negatively correlated with CDC20 expression. B The mRNA expression levels of TP53, P21 and CDC20 in Jeko, Mino, Z138 and JVM2 cells were detected by RT-qPCR after treatment with 5 μM (Z138 and JVM2) or 10 μM (Jeko and Mino) nutlin-3a for 24 h. C The protein expression levels of p53, p21 and CDC20 in Jeko, Mino, Z138 and JVM2 cells were analyzed by WB after the same treatment. The increase in p53 expression was accompanied by a decrease in CDC20 expression only in Z138 and JVM2 cells (B, C). Data were obtained from at least three independent experiments and presented as mean ± SD. *P < 0.05, ***P < 0.001 compared with the control group; ns, P > 0.05.
Tumor growth in the nutlin-3a group, the apcin group and the combined group was significantly inhibited compared with the control group, and the combined group showed stronger tumor growth inhibition than either single-agent group (Fig. 9A and D). Besides, tumor weight in the three treatment groups was significantly lower than that in the control group (Fig. 9B). Surprisingly, we found that treatment with nutlin-3a/apcin alone or in combination all reduced spleen weight, and combination treatment resulted in lower spleen weight than the two single treatments (Fig. 9C and E), indicating that combined injection could more effectively alleviate the splenomegaly caused by MCL modeling.

The in vivo safety of nutlin-3a and apcin administration was further assessed. Compared with the control group, nutlin-3a/apcin alone or in combination had no significant effect on body weight (Fig. 10A). HE staining showed no obvious damage to the heart, liver, or kidney in the three treatment groups or the control group (Fig. 10B). The blood biochemistry results were also compared to determine whether there was any abnormality in heart, liver and kidney function after treatment. Compared with the control group, serum alanine aminotransferase (ALT), aspartate aminotransferase (AST), albumin (ALB), creatinine (CREA) and lactate dehydrogenase (LDH) in the nutlin-3a group, the apcin group and the combined group showed no significant changes (Fig. 10C). Routine blood testing was also performed on each mouse, and the results suggested that nutlin-3a/apcin alone or in combination had no significant effect on hemoglobin (HGB), white blood cell count (WBC), platelet count (PLT), lymphocyte count (LY) or the ratio of lymphocytes (LY%) (Fig. 10D). These results illustrated the safety and tolerability of in vivo administration of nutlin-3a and apcin.

The protein expression levels of p53, CDC20, cleaved PARP and Ki-67 in the tumor tissues of the four groups were compared by IHC staining (Fig. 11A). As shown in Fig. 11B, p53 expression was significantly elevated after administration of nutlin-3a alone or combined with apcin, while p53 expression showed no significant difference between the nutlin-3a group and the combined group. CDC20 expression was downregulated in the three treatment groups, and the expression level was lowest in the co-treatment group. The protein expression of cleaved PARP, an apoptosis index, was significantly increased in the three treatment groups, and the expression in the combined group was higher than that in the two single-treatment groups. In parallel, tumor proliferation was significantly attenuated in the three treatment groups, accompanied by a decreased expression level of Ki-67, which was lowest in the combined group. Therefore, we validated that combined administration of nutlin-3a and apcin promoted tumor apoptosis and suppressed tumor proliferation more potently than single administration.

Discussion

Although more and more targeted drugs have been tested in clinical trials in recent years to prolong the long-term survival of MCL patients and improve their clinical prognosis, most MCL patients still progress to relapsed/refractory cases [3,38]. Therefore, studies on relevant markers in MCL pathogenesis may help to identify potential new targets for MCL therapy.
Cell cycle disorder is a major event in MCL, characterized by overexpression of cyclin D1, stimulation of cyclin-dependent kinase (CDK) 4 or 6 and depletion of the CDK inhibitor p16INK4. These abnormalities ultimately disrupt the G1 phase of the cell cycle and promote the G1/S transition, leading to tumor proliferation in MCL [12,39,40]. Some clinical trials have explored the potential of CDK inhibitors for MCL treatment. For example, the overall response rate (ORR) of palbociclib, a selective CDK4/6 inhibitor, was 18% in MCL patients [41], while it increased to 21% and 67% after combination with bortezomib [42] and ibrutinib [43], respectively. Abemaciclib, another CDK4/6 inhibitor, obtained an ORR of 35.7% in relapsed/refractory MCL patients [44]. AT7519M, a pan-CDK inhibitor targeting CDK1/2/4/5/9, had an ORR of 27% in MCL patients [45]. Flavopiridol, another pan-CDK inhibitor targeting CDK1/2/4/6, achieved a partial remission rate of 11% in MCL patients [46]. Thus, the therapeutic efficacy in these clinical trials was not ideal, and patients usually suffered from adverse events such as neutropenia, thrombocytopenia and diarrhea. There was an urgent need to identify other cell cycle-related therapeutic targets.

Few studies have explored the role of the mitotic phase in MCL progression. CDC20, as a substrate-recruiting subunit, binds to APC/C to form the activated APC/C-CDC20 complex, thereby driving the metaphase-anaphase transition, mitotic exit and ubiquitination-mediated degradation of key substrates in the cell cycle [15,47,48]. In addition to its pivotal role in regulating cell cycle progression, CDC20 has been reported to have an oncogenic effect owing to its overexpression in a broad spectrum of cancers. Meanwhile, CDC20 functions as a prognostic factor in some tumors and is closely related to the clinicopathological characteristics of tumor patients [16-27]. Based on the multiple roles of CDC20 in the cell cycle and carcinogenesis, we considered its role in MCL pathogenesis.

Fig. 6 p53 transcriptionally repressed CDC20 expression by directly binding to the CDC20 promoter. A A dual-luciferase reporter gene assay was conducted to verify whether p53 could regulate CDC20 promoter activity (TSS −1495 bp ~ +26 bp). The results implied that p53 overexpression in 293T cells could significantly inhibit CDC20 promoter activity. Relative luciferase activity was expressed as the ratio of firefly luciferase activity to renilla luciferase activity. Data were obtained from three independent experiments and presented as mean ± SD. ***P < 0.001. B The CUT&Tag assay proved that p53 directly binds to the CDC20 promoter region at TSS −492 bp ~ +101 bp. Compared with the untreated control group, the CDC20 promoter signal was significantly decreased in Z138 cells treated with 5 μM nutlin-3a for 24 h. The red box indicates the binding site of p53 on the CDC20 promoter on chromosome 1, and the characteristic signal peaks of the control group (upper) and nutlin-3a-treated group (lower) in this region. C Z138 cells were pre-treated with 15 μM PFT-α for 1 h, and then co-treated with 50 μM apcin for 24 h. WB results showed that pre-treatment of Z138 cells with PFT-α could rescue the CDC20 expression reduced by apcin treatment. Data were obtained from at least three independent experiments and presented as mean ± SD. **P < 0.01, ***P < 0.001; ns, P > 0.05.
Maes et al. demonstrated that APC/C is a new target for the treatment of MCL, although the effect of CDC20 on MCL cells was not explored in depth [34]. Guo et al. conducted a series of bioinformatics analyses on the GSE93291 dataset and found that CDC20 was associated with the overall survival of MCL patients, but the authors did not further prove whether CDC20 is indeed an important target in MCL pathogenesis [49]. In the current study, we confirmed that CDC20 was upregulated at both mRNA and protein levels in MCL patients and in MCL cell lines possessing either mutant p53 (Jeko and Mino) or wild-type p53 (Z138 and JVM2). Among these cell lines, p53-mutant cells showed higher CDC20 expression than p53-wild type cells, which might be partly due to the loss of p53 regulation in p53-mutant cells. There was a positive correlation between cyclin D1 and CDC20 in the 51 patients with IHC staining, suggesting that the more serious the patient's condition, the higher the CDC20 expression. We compared the clinical parameters between the high and low CDC20 expression groups in MCL patients with PBMCs and tumor sections, and revealed that higher expression of CDC20 implied poor treatment response, worse tumor staging, increased risk of bone marrow involvement and dismal prognosis in MCL patients. Since peripheral blood is easily obtained from clinical patients, it would be practical to detect CDC20 expression in PBMCs in the future.

Fig. 7 (legend excerpt) The above data were obtained from at least three independent experiments and presented as mean ± SD. *P < 0.05, **P < 0.01, ***P < 0.001. F After Z138 and JVM2 cells were co-treated with 2.5 μM nutlin-3a and 50 μM apcin for 48 h (Z138) or 72 h (JVM2), the proportions of the G0/G1, S and G2/M phases in the cell cycle were analyzed by PI flow cytometry. *P < 0.05, **P < 0.01, ***P < 0.001 compared with the control group.

Next, we explored the effect of CDC20 on the biological behaviors of MCL cells. The CDC20 inhibitor apcin, which blocks the binding of CDC20-related substrates to CDC20, has been shown in several studies to affect tumor cell growth and drug sensitivity [50-53]. In this study, apcin treatment attenuated cell proliferation, migration and invasion, enhanced cell apoptosis, and induced G2/M arrest, demonstrating that CDC20 inhibition can exert anti-tumor activity in MCL. We then sought to identify upstream regulators targeting CDC20 and to clarify the specific mechanism by which they influence the development and progression of MCL through CDC20. In addition to the chromosomal translocation t(11;14)(q13;q32), p53 inactivation is another common cytogenetic change in MCL patients, accounting for 20% of newly diagnosed cases [40]. In our study, the p53 mutation rate was 37.5% (9/24) in the 24 MCL patients with peripheral blood samples. How to treat MCL patients with p53 mutation remains challenging. One alternative mechanism of p53 inactivation is MDM2 overexpression, MDM2 being a negative regulator of p53 [30,54]. Therefore, activating p53 by inhibiting MDM2 might become a new method for MCL treatment. Notably, p53 is a recognized transcription factor, and target genes transcriptionally inhibited by p53 are usually upregulated in tumors and considered to be targets of cancer therapy [31].
We found that CDC20 was highly expressed in MCL; together with the facts that p53-mutant MCL cells had higher CDC20 expression and that p53 inactivation is common in MCL, we speculated that p53 might be an upstream regulator of CDC20. A previous study performed by Kidokoro et al. showed that p53 negatively regulates CDC20 expression through CDE-CHR elements in the CDC20 promoter. This regulation was p53-dependent, as CDC20 suppression by adriamycin-induced p53 elevation was observed only in p53 wild-type cells, not in p53-mutant and p53-null cells [31]. Banerjee et al. reported that under genotoxic stress, p53 transcriptionally downregulated CDC20 by directly binding to the CDC20 promoter and promoting chromatin remodeling [55]. Another study using bioinformatics analysis concluded that in triple-negative breast cancer (TNBC), CDC20 is regulated by the p53 signaling pathway: after treatment with podophyllotoxin, p53 expression increased and CDC20 expression decreased in TNBC cell lines [32]. Sun et al. proved that CDC20 is a critical downstream factor of the MDM2-p53 signaling pathway in diffuse large B-cell lymphoma, and knockdown of MDM2 resulted in upregulation of p53 and downregulation of CDC20 [27]. However, no research had explained the interaction of p53 and CDC20 in MCL.

We preliminarily discovered the negative correlation between p53 and CDC20 in GSE10793 and GSE93291, two GEO datasets of MCL patients. We also analyzed the relationship between p53 and CDC20 in our own cohort of 24 patients. Although the results did not reach statistical significance due to the small sample size, a negative regulatory trend existed between p53 and CDC20. Nutlin-3a, an agent that inhibits MDM2 to reactivate p53, was chosen to further illustrate the relationship between p53 and CDC20 in MCL. We verified elevated expression of p53 and reduced expression of CDC20 in nutlin-3a-treated p53-wild type MCL cells, but not in p53-mutant MCL cells, meaning that only functionally active p53 is able to regulate CDC20. Moreover, nutlin-3a had an inhibitory effect on MCL tumorigenesis similar to apcin in p53-wild type Z138 and JVM2 cells, indicating the important function of the p53-CDC20 regulatory axis in MCL. In terms of mechanism, the dual-luciferase reporter gene assay and CUT&Tag technology validated that CDC20 is transcriptionally repressed by p53 via direct binding of p53 to the CDC20 promoter from 492 bp upstream to 101 bp downstream of the TSS in MCL. These findings were consistent with previous studies, further suggesting that the negative regulation of CDC20 by p53 might be universal in cancers.

As mentioned above, we uncovered the important role of p53 and CDC20 in MCL. Based on the clinical fact that cell cycle dysregulation and p53 inactivation commonly occur in MCL patients, we explored the combined effect of nutlin-3a and apcin in MCL. Theoretically, the combined inhibition of CDC20 expression by nutlin-3a and apcin might have a stronger anti-MCL effect. In our study, the combination of the two agents inhibited CDC20 protein expression to a great extent, and achieved better anti-proliferative and pro-apoptotic activities than either single agent. Most importantly, whether nutlin-3a or apcin was used alone or in combination, they had little effect on the cell viability of healthy PBMCs (Figs. 3A, 5A and 7A), demonstrating the tumor-specific inhibitory effect and the safety for normal cells of the two agents in vitro.
Fig. 9 Treatment with nutlin-3a/apcin alone or in combination exerted an anti-tumor effect in the Z138 cell-driven xenograft tumor model. A Tumor growth curves in the control group, the nutlin-3a group, the apcin group and the nutlin-3a plus apcin group after treatment. Data were presented as mean ± SD. *P < 0.05, **P < 0.01. B Effect of treatment with nutlin-3a/apcin alone or in combination on tumor weight. Data were presented as mean ± SD. ***P < 0.001; ns, P > 0.05. C Effect of administration of nutlin-3a/apcin alone or in combination on spleen weight. Data were presented as mean ± SD. *P < 0.05, **P < 0.01, ***P < 0.001. D, E Tumors (D) and spleens (E) in the four groups were harvested and photographed.

The therapeutic potential of nutlin-3a and apcin was also examined in vivo. Nutlin-3a/apcin alone or in combination prevented tumor growth and benefited the mouse spleens by reversing the splenomegaly caused by MCL modeling, with the combination group showing the best efficacy. We confirmed that injection of nutlin-3a/apcin alone or in combination was safe and tolerable in vivo, as no significant abnormality was found in mouse body weight, organ function, routine blood tests or blood biochemistry tests. IHC staining also revealed that CDC20 protein expression was lower in the combined treatment group than in the single treatment groups. Meanwhile, increased cleaved PARP expression and decreased Ki-67 expression were more apparent in the combined group than in the single groups. These data provide initial evidence for the clinical application of nutlin-3a and apcin.

Conclusions

This study validated the essential role of CDC20 in MCL tumorigenesis as well as the mechanism by which p53 regulates CDC20 in MCL. We confirmed that CDC20 is upregulated in MCL, and that high expression of CDC20 is related to poor clinicopathological features and prognosis of MCL patients. Both the CDC20 inhibitor apcin and the p53 agonist nutlin-3a could inhibit cell proliferation, migration and invasion, and induce apoptosis and cell cycle arrest in MCL cells. Combined treatment with nutlin-3a and apcin in vitro and in vivo enhanced the anti-MCL activity. Therefore, dual-targeting of p53 and CDC20 promises to be a prospective MCL treatment strategy, providing a new insight for MCL therapeutics.

Fig. 11 The protein expression of p53, CDC20, cleaved PARP and Ki-67 in tumor tissues was determined by quantitative IHC analysis (MOD values) in the control group, the nutlin-3a group, the apcin group, and the nutlin-3a plus apcin group. *P < 0.05, **P < 0.01, ***P < 0.001.
2022-03-16T15:13:58.929Z
2023-03-07T00:00:00.000
{ "year": 2023, "sha1": "4adf24549976b91628d02ff11ee01eb2251f8cf7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "48493f5ca9b9d7f0fccc40e4097e54f79f521221", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
52099928
pes2o/s2orc
v3-fos-license
14-3-3 and enolase abundances in the CSF of prion-diseased rats

ABSTRACT Creutzfeldt-Jakob disease (CJD) is characterized by an extended asymptomatic preclinical phase followed by rapid neurodegeneration. There are no effective treatments. CJD diagnosis is initially suspected based upon the clinical presentation of the disease and the exclusion of other etiologies. Neurologic symptoms are assessed in combination with results from cerebrospinal fluid (CSF) biomarker abundances, electroencephalography (EEG), magnetic resonance imaging (MRI), and, in some countries, real-time quaking-induced conversion (RT-QuIC). Inconsistencies in the sensitivities and specificities of prion disease biomarker abundances in CSF have been described, which can affect diagnostic certainty, but the utility of these biomarkers for prognosis has not been fully explored. The clinical presentation of CJD is variable, and factors such as prion protein polymorphic variants, prion strain, and other genetic or environmental contributions may affect disease progression, confounding the appearance or abundance of biomarkers in the CSF and thereby complicating diagnosis. In this study, we controlled for many of these variables through the analysis of serial samples of CSF from prion-infected and control rats. Prion disease in laboratory rodents follows a defined disease course, as the infection route and time, prion strain, genotype, and environmental conditions are all controlled. We measured the relative abundance of 14-3-3 and neuron-specific enolase (NSE) in CSF during the course of prion infection in rats. Even when disease-related, environmental, and genetic variables were controlled, CSF 14-3-3 and NSE abundances were variable. Our study emphasizes the considerable diagnostic and prognostic limitations of these prion biomarkers.

Introduction
Prion diseases, or transmissible spongiform encephalopathies (TSEs), are neurodegenerative disorders that affect humans and other mammals including sheep, cattle, and cervids. Prion diseases are always fatal and are characterized by an extended preclinical, asymptomatic period, followed by a rapid clinical phase. Prion diseases arise when the normal cellular prion protein (PrP^C) is converted into infectious, protease-resistant prions (PrP^CJD) through a process in which the α-helical coil structure is refolded into β-sheet [1,2]. Definitive diagnosis of human prion disease involves direct detection of PrP^CJD, typically from postmortem brain samples. Ante-mortem methods such as CSF protein biomarkers, EEG, MRI, and RT-QuIC are used in combination with clinical symptoms to diagnose probable CJD [3-5]. The inclusion of MRI in the diagnostic criteria and the development of direct methods for detecting PrP^CJD, such as RT-QuIC, have reduced the dependence on indirect markers for diagnosis; however, 14-3-3 and NSE remain testable, with potential prognostic utility. Elevated 14-3-3 protein in the CSF was added to the World Health Organization (WHO) diagnostic criteria for sporadic CJD (sCJD) in 1998 [3,6,7]. 14-3-3s are 28-30 kDa proteins present in all eukaryotic cells and comprise nearly 1% of soluble brain protein [8-10]. Of the seven 14-3-3 proteins, the beta (β) and gamma (γ) family members have been most associated with human prion disease [11,12]. Another biomarker upregulated in the CSF of CJD patients is NSE [13,14].
Increased concentrations of 14-3-3 and NSE in CSF have been suggested as a means to move patients from the possible to the probable category of CJD [15,16]. NSE, or gamma-enolase, is a 47 kDa protein specific to neurons and neuroendocrine tissues [17-19]. A wide range of sensitivities and specificities has been reported for CSF biomarkers of CJD. For example, 14-3-3 detection ranged from 53-97% in sensitivity and 40-100% in specificity, depending on the study, as reviewed by Forner et al., 2015 [20]. The presence of 14-3-3 and NSE in the CSF of patients with prion disease is not specific to prion disease; these proteins are present in CSF from patients with other neuronal injury, e.g., brain trauma, brain tumors, subarachnoid hemorrhage, stroke, hypoxemia, and encephalitis [21-28]. This lack of specificity may result in false positives if applied to a diverse patient population; therefore, CSF biomarker testing is restricted to patients with clinical symptoms indicative of CJD [29,30]. As patients are assessed for CSF biomarkers after clinical onset, the positive predictive values in CJD studies are often overstated [31]. It has been suggested that many factors impact prion disease biomarker appearance, including disease strain, subtype, duration, age at onset, and timing of lumbar puncture [7,32-40]. For example, when 70 sCJD cases with distinct molecular subtypes were evaluated for 14-3-3 using ELISA, the most elevated levels of 14-3-3 were observed in classical molecular subtypes (MM1), and lower levels were observed with less common subtypes (MV2, MM2) with longer durations and atypical presentations of CJD [41]. CSF biomarkers in sCJD have been tracked over the clinical course of disease [42,43] but have not been examined during the preclinical asymptomatic phase of the disease. The progressive neurodegenerative nature of clinical prion disease suggests that biomarker abundances should also change as disease advances, providing a means for prognosis and determination of disease course when assessed over time. The objective of this study was to evaluate the performance of 14-3-3 and NSE in the diagnosis and prognosis of prion disease when time of infection, prion strain and dose, and other variables (age, genetic background, and environmental conditions) were controlled. We found that, even in a controlled study of prion infection, 14-3-3 and NSE abundances were variable, suggesting that this variability does not arise from differences in dose, age at infection, environment, or Prnp genetics.

Development of rat-adapted scrapie
The RML strain of mouse-adapted scrapie was transmitted to laboratory rats by intracranial inoculation [44]. Upon subsequent serial passage, the incubation period stabilized at 200 days. Clinical symptoms of rat-adapted scrapie (RAS) include ataxia, lethargy, weight loss, kyphosis, myoclonus, and increased secretions from the Harderian glands. PrP^Sc is widely deposited in the cerebral cortex, hippocampus, thalamus, inferior colliculus, and granular layer of the cerebellum [44]. A significant increase in proteinase K-resistant PrP is observed between 75 and 113 days post-infection, with amounts plateauing thereafter (Figure 1).

Time course analysis of 14-3-3 and NSE CSF abundance
CSF from control uninfected rats and rats infected with RAS was collected at preclinical and clinical disease time points.
To determine differences between infected and uninfected animals and to refine the timeline of detection, samples were pooled (equal volumes) and analyzed by western blot for 14-3-3 and NSE abundances. Samples were standardized by loading equal volumes of CSF. Two monoclonal antibodies, one 14-3-3 pan-specific (recognizing the beta, eta, epsilon, gamma, and zeta proteins) and one 14-3-3 gamma-specific, were used for comparison. The pan-specific antibody is currently approved for diagnostic use [3], although some studies have suggested that 14-3-3 gamma has better performance [45]. An elevated 14-3-3 abundance was observed in the CSF of prion-infected rats (Figure 2). 14-3-3 gamma showed more robust diagnostic sensitivity at earlier time points. 14-3-3s were also detected at the early time points (75 and 113 days) in the uninfected samples. NSE abundance was also elevated in the pooled CSF samples from infected rats at later time points; however, detection at early time points in both infected and control CSF samples suggests a lack of specificity (Figure 2). 14-3-3 and NSE abundances were variable in both infected and uninfected rats. An advantage of using rat prion disease to model CJD is the ability to analyze CSF samples at defined preclinical time points. Equal volumes of CSF samples from a preclinical time point (148 dpi) and age-matched controls were analyzed by western blot. Considerable variability in both 14-3-3 and NSE abundances was observed in infected individuals (Figure 3). AUC values were 0.58, 0.61, and 0.69 for 14-3-3 pan, 14-3-3 gamma, and NSE, respectively, when samples were loaded by volume. When normalizing for protein concentration, AUC values were 0.78, 0.58, and 0.78 for 14-3-3 pan, 14-3-3 gamma, and NSE, respectively (Figure 4). These markers were also detected in CSF from age-matched control samples. This approach was not able to distinguish between infected and control rats.

Individual variability in clinically affected rats (193 days)
Two approaches were used to determine whether biomarker abundances differed between individual animals at the clinical phase of disease: i) analysis by loading equal volumes of CSF, and ii) analysis by standardizing the protein concentrations of the individual CSF samples. Equal volumes (10 µl) of CSF samples from clinically infected rats (193 dpi) and from age-matched controls were analyzed by western blot (Figure 5). 14-3-3 and NSE abundances were variable in the infected CSF samples, and these biomarkers were also detected in some uninfected control samples. The diagnostic trade-off between specificity and sensitivity was analyzed by measuring the area under the receiver operating characteristic (ROC) curve. Values of 0.83, 0.81, and 0.66 were determined for 14-3-3 pan, 14-3-3 gamma, and NSE, respectively. There was no significant difference in the abundances of the biomarkers between the infected and uninfected samples when loading was based on volume. We subsequently investigated whether standardizing the total protein concentration of CSF influenced the diagnostic sensitivity and specificity (Figure 6). Protein content prior to normalization was variable (infected 0.97 ± 0.80 µg/µl; uninfected 1.02 ± 0.65 µg/µl). Following western blot analysis (0.4 µg of total CSF protein per lane), the AUC values for 14-3-3 and NSE increased compared with standardization by volume.
14-3-3 pan and gamma increased to 0.99 and 0.97, respectively (p < 0.05), demonstrating that these tests could accurately differentiate infected from uninfected samples in clinically affected animals. The AUC for NSE increased to 0.75 as a result of standardization by protein; however, this increase was not significant.

Discussion
As 14-3-3 is a CJD biomarker in the WHO diagnostic criteria, we investigated the utility of this biomarker in rat prion disease, where we could control genetics, environment, prion strain, route, titre, and stage of disease. NSE was also analyzed, as it is another marker that has been used to diagnose CJD. We hypothesized that by controlling for these variables, the diagnostic accuracy of 14-3-3 and NSE abundances would be improved. Absolute band intensity for 14-3-3 and NSE protein levels was measured from immunoblots at both preclinical and clinical stages of rat prion infection. Samples were standardized by either volume or protein concentration. 14-3-3 CSF protein abundance was significantly higher on average in clinically affected rats when standardized for protein content. A number of individual infected samples could not be distinguished from uninfected samples, and the diagnostic sensitivity of 14-3-3 was low. Neither 14-3-3 nor NSE protein abundance distinguished infected from uninfected animals at the preclinical stage of disease, regardless of standardization method. 14-3-3 and NSE CSF abundances did not consistently distinguish infected from uninfected animals in this controlled study. We used the rat model as a method of evaluating biomarkers of prion disease; it allows evaluation of biomarkers at both preclinical and clinical stages of infection in a controlled environment. We found that 14-3-3 and NSE abundances did not track with the disease course and were highly variable in individual CSF samples standardized by either volume or protein concentration. These findings are similar to those of Torres et al. (2012), in which 14-3-3 protein levels were inconsistent in longitudinal samples and did not track with advancing disease in CJD patients [43]. Furthermore, use of 14-3-3 detection in the diagnosis of CJD has led to an increase in overdiagnosis of sporadic CJD and misdiagnosis of potentially treatable diseases [46,47]. Our data indicate a lack of prognostic utility for 14-3-3 and NSE in prion infection and question the utility of these biomarkers for clinical diagnosis in the context of improved imaging and the development of direct detection of PrP^CJD by RT-QuIC.

Ethics statement
This study was carried out in accordance with the guidelines of the Canadian Council on Animal Care. The protocols used were approved by the Institutional Animal Care and Use Committee at the University of Alberta. All animals were anesthetized with isoflurane (chosen for rapid induction and easy control of anesthetic depth) prior to CSF collection and were then euthanized for collection of the remaining samples.

Animals
Weanling female Sprague Dawley rats were used in this study. The animals were housed two per cage in Tecniplast Green Line IVC Sealsafe PLUS rat cages on Lab Aspen Chip bedding. They were fed Picolab Rodent 20 5053C3N and kept on a 12-hour light/dark cycle at approximately 21°C. Rats used in this study were inoculated with a rat-adapted scrapie (RAS) prion agent that had been adapted to the rat through multiple passages; the incubation period had stabilized at approximately 200 days.
Previous characterization of this agent in the rat allowed us to predict three preclinical time periods for sample collection and to determine a humane endpoint when clinical signs were present [44]. Mortality did not occur outside of planned euthanasia and humane endpoints. Animals were randomly assigned to infected and uninfected groups.

Prion infection and CSF collection
Weanling female Sprague Dawley rats were inoculated through the fontanelle to a depth of ~1 cm, using a 25-gauge needle, with 50 µl of 10% rat brain homogenate prepared from uninfected or RAS-affected rats [44]. CSF and brain samples from RAS-infected and age-matched control rats were collected at preclinical (75, 113, and 148 days post-inoculation (dpi)) and clinical (193 dpi) time points and compared with uninfected, age-matched control samples. Clinical signs of prion disease were observed at ~180 days post-inoculation and included porphyrin staining, myoclonus, kyphosis, ataxia, and weight loss. Infected animals at the clinical stage weighed approximately 260 g; uninfected age-matched controls weighed approximately 330 g. At the specified time points, rats were anesthetized with isoflurane and CSF was collected via puncture of the cisterna magna with a 25-gauge neonatal spinal needle (BD Biosciences). CSF volumes of 100-200 µl were routinely obtained and scored for blood contamination (Supplementary Table 1). CSF was centrifuged for 30 s at 2,000 × g on a benchtop 'nano' centrifuge to assess blood contamination (presence of a red cell pellet), and the supernatant was collected. Samples that scored level 3 or higher for blood contamination were discarded. CSF was frozen and stored at −80°C. In some experiments, equal volumes of CSF from each time point were pooled to ensure homogeneity before sample buffer was added and the samples heated to 100°C.

Immunoblotting
Individual rat brains were homogenized to 10% (w/vol) in PBS. Samples were digested with proteinase K (PK; Roche) at a ratio of 3.5 µg PK per 100 µg protein (50 µg/ml) for 30 min at 37°C. Rat prion proteins were detected with the SAF83 antibody at a 1:20,000 dilution, and the secondary antibody, goat anti-mouse AP, was used at a 1:10,000 dilution. Membranes were developed using AttoPhos AP fluorescent substrate (Promega) and imaged with the ImageQuant.

Statistical analysis
Densitometry analysis was performed on western blot data for semi-quantification using Adobe Photoshop CS6 Extended to determine absolute band intensity. Protein biomarker abundances from infected and uninfected samples were always analyzed together from the same western blot. A two-tailed unpaired t-test was performed; the means (and standard errors of the mean) of infected and uninfected groups were compared. These data were used to generate a receiver operating characteristic (ROC) curve plotting sensitivity vs 1-specificity. The area under the curve (AUC) was calculated to determine the diagnostic accuracy of the biomarkers, i.e., the ability of each marker to distinguish between healthy and prion-infected rats. ROC curves and AUC values were generated using GraphPad Prism 5 software.
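For readers who want to reproduce this kind of analysis outside Prism, the sketch below shows one way to compute the t-test and AUC from densitometry values. It is a minimal illustration assuming two arrays of band intensities; the values here are invented placeholders, not the study's data.

```python
# Minimal sketch: t-test and ROC/AUC from densitometry band intensities
# for infected vs uninfected CSF samples. Intensity values below are
# invented placeholders, not the study's measurements.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.metrics import roc_auc_score, roc_curve

infected = np.array([0.82, 0.55, 0.91, 0.33, 0.74, 0.68])
uninfected = np.array([0.21, 0.45, 0.12, 0.38, 0.29, 0.52])

# Two-tailed unpaired t-test on mean band intensity
t_stat, p_val = ttest_ind(infected, uninfected)

# ROC/AUC: label infected samples 1, uninfected 0
labels = np.concatenate([np.ones_like(infected), np.zeros_like(uninfected)])
scores = np.concatenate([infected, uninfected])
auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)  # sensitivity vs 1-specificity

print(f"t = {t_stat:.2f}, P = {p_val:.3f}, AUC = {auc:.2f}")
```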
2018-08-29T23:34:40.985Z
2018-07-04T00:00:00.000
{ "year": 2018, "sha1": "0ed82abbab35d051071e12935aa767c429f62f9c", "oa_license": "CCBYNCND", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/19336896.2018.1513317?needAccess=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0ed82abbab35d051071e12935aa767c429f62f9c", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
6984784
pes2o/s2orc
v3-fos-license
Signaling mechanisms of endogenous angiogenesis inhibitors derived from type IV collagen.

Vascular basement membrane (VBM)-derived molecules are regulators of biological activities such as cell growth, differentiation, and angiogenesis. Angiogenesis is regulated by a systematically controlled balance between VBM-derived antiangiogenic factors and proangiogenic growth factors. In the normal physiological state, equilibrium is maintained between the antiangiogenic and proangiogenic factors. The antiangiogenic molecules, which are generated by proteolytic cleavage of the VBM, include the α1 chain non-collagenous (NC1) domain of type XVIII collagen (endostatin) and the NC1 domains from the α chains of Type IV collagen, considered endogenous angiogenesis inhibitors. These collagen-derived NC1 domains have a pivotal role in the regulation of tumor angiogenesis, making them attractive alternative candidates for cancer therapies. In this review we provide a comprehensive overview of the knowledge gained from the signaling mechanisms of Type IV collagen-derived endogenous inhibitors in angiogenesis.

Introduction
Angiogenesis, the sprouting of capillaries from pre-existing blood vessels or the splitting of blood vessels, is among the key events in destructive pathological processes such as tumor growth, metastasis, arthritis, and age-related macular degeneration, as well as in physiological processes such as development, organ growth, reproduction, and wound healing (Folkman, 1995a). Folkman's group first put forward the hypothesis that tumor growth is dependent on neovascularization, or angiogenesis (Folkman, 1995a; Folkman, 1995b). The growth of tumors is strictly dependent on neovascularization, and inhibition of the vascular supply to tumors can suppress tumor growth (Folkman, 1971; Hanahan and Folkman, 1996). Solid tumors cannot grow beyond 2 to 3 mm in diameter without recruiting their own blood supply; tumor angiogenesis thus results from a balance between endogenous activators [vascular endothelial growth factor (VEGF), fibroblast growth factor (FGF), platelet-derived growth factor (PDGF), etc.] and inhibitors [various antiangiogenic peptides generated from VBM or extracellular matrix (ECM) degradation by proteases] (Folkman, 1995a; Kieran et al. 2003; Folkman, 2003).

Type IV Collagen Derived Angiogenesis Inhibitors
Type IV collagen is the most abundant constituent of the basement membrane (BM), where it forms a network-like structure in the extracellular matrix. Type IV collagen provides a scaffold in the BM together with other macromolecules, such as laminins, heparan sulfate proteoglycans, fibronectin, and entactin, and regulates interactions with adhering cells (Kuhn et al. 1981; Timpl, 1996). Type IV collagen is normally found only in the BM, but during pathogenesis it is associated with tumor fibrosis and accumulates in the tumor interstitium (Kuhn et al. 1981). Type IV collagen is composed of six distinct gene products (α1 to α6); their genes show a pair-wise head-to-head arrangement with bi-directional promoters and map to three different chromosomes (Hudson et al. 1993; Hudson et al. 1994; Kuhn, 1995). The α1 and α2 chains are the most abundant forms of Type IV collagen and are found in most basement membranes (Hudson et al. 2003), whereas the α3-α6 chains are found in the kidney, in a specialized glomerular basement membrane with specific functional properties (Hudson et al. 2003).
The [α1(IV)]2α2(IV) trimers contain a triple-helical domain with binding sites for α1β1 and α2β1 integrins (Vandenberg et al. 1991). As early as 1986, binding of cells to Type IV collagen and its inhibition by Type IV collagen peptides was demonstrated (Aumailley and Timpl, 1986; Tsilibary et al. 1990; Chelberg et al. 1990). Tsilibary, in 1990, first reported that a peptide derived from the non-collagenous (NC1) domain of the α1(IV) chain could promote adhesion of bovine aortic endothelial cells (Tsilibary et al. 1990). Functional α1 and α2 Type IV collagen chains isolated from Engelbreth-Holm-Swarm sarcoma tumors inhibited capillary endothelial cell proliferation (Ries et al. 1995; Madri, 1997). New functions for the α2, α3, and α6 NC1 domains of Type IV collagen and their integrin ligands in inhibiting angiogenesis and tumor growth in vivo were reported in 2000 (Petitclerc et al. 2000). Later, several laboratories worked on these molecules and further supported the antiangiogenic and antitumorigenic activities of these NC1 domains (Maeshima et al. 2000; Pasco et al. 2000; Colorado et al. 2000; Marneros and Olsen, 2001; Maeshima et al. 2002; Sudhakar et al. 2003; Hamano et al. 2003; Sudhakar et al. 2005; Roth et al. 2005; Magnon et al. 2005; Borza et al. 2006; Boosani and Sudhakar, 2006; Boosani et al. 2007; Magnon et al. 2007). The molecular signaling mechanisms by which the α1, α2, α3, and α6 NC1 domains of Type IV collagen regulate angiogenesis are updated in this review. Understanding the mechanism(s) of action of such molecules would aid in unraveling their therapeutic applications.

α1(IV)NC1 or arresten
α1(IV)NC1 is one of the recently identified endogenous inhibitors of angiogenesis. It is a 26-kDa molecule released by proteases from the NC1 domain of the α1 chain of Type IV collagen (Sudhakar et al. 2005; Boosani et al. 2006). Extensive studies from my laboratory and others suggest that α1(IV)NC1 functions via α1β1 integrin and blocks the binding of α1β1 integrin to Type IV collagen (Sudhakar et al. 2005). Integrin α1β1 is a collagen-binding receptor that also binds other basement membrane components such as laminin (Zutter and Santoro, 1990; Keely et al. 1995). Both α1 and β1 integrins are involved in angiogenesis (Senger et al. 2002). Using neutralizing antibodies against α1 integrin, angiogenesis associated with tumor growth could be suppressed. Blocking α1β1 integrin interactions with the ECM inhibits angiogenesis, indicating that α1β1 integrin acts as a proangiogenic receptor (Senger et al. 2002). Among the integrin receptors for collagen, α1β1 integrin activates the Ras/Shc mitogen-activated protein kinase (MAPK) pathway, promoting cell proliferation (Senger et al. 2002). We demonstrated that α1(IV)NC1 binds α1β1 integrin in a Type IV collagen-dependent manner, mediates all of its antiangiogenic functions through this integrin, and inhibits angiogenesis by suppressing endothelial cell proliferation, migration, and tube formation (Boosani et al. 2006). α1(IV)NC1 might also function via binding to heparan sulfate proteoglycans, which were previously reported to bind the α1(IV)NC1 domain. A significant halt in pathological angiogenesis and tumor growth was reported in α1 integrin knockout mice (Pozzi et al. 2000; Sudhakar et al. 2005). α1(IV)NC1 had no effect on α1 integrin knockout mouse lung endothelial cells, whereas it significantly inhibited proliferation of wild-type mouse lung endothelial cells.
This confirms the significance of integrin-mediated signaling by α1(IV)NC1. In endothelial cells, ligand binding to integrins induces FAK phosphorylation, which serves as a platform for various downstream signals (Hynes, 2002; Kim et al. 2002; Sudhakar et al. 2003). Classical integrin-ligand interactions are known to initiate intracellular signaling pathways; however, some of these signaling events are inhibited by α1(IV)NC1 through its binding to α1β1 integrin. α1(IV)NC1 inhibits phosphorylation of FAK when mouse lung endothelial cells (MLECs) are plated on a Type IV collagen matrix. Similar inhibition of FAK phosphorylation was not observed upon α1(IV)NC1 treatment of α1 integrin knockout MLECs. Downstream of FAK, protein kinase B (Akt/PKB) plays an important role in endothelial cell survival signaling (Shiojima and Walsh, 2002; Sudhakar et al. 2003; Sudhakar et al. 2005). α1(IV)NC1 does not inhibit Akt or phosphatidylinositol 3-kinase (PI3K) phosphorylation, suggesting that α1(IV)NC1 regulates endothelial cell migration in an Akt-independent manner. Interestingly, hypoxia-inducible factor 1 alpha (HIF-1α) expression was inhibited by α1(IV)NC1 treatment of hypoxic (oxygen-deprived) endothelial cells. HIF-1α is an oxygen-dependent transcriptional activator that plays crucial roles in tumor angiogenesis (Semenza, 2003; Lee et al. 2004). HIF-1α regulates cellular responses to physiological and pathological hypoxia, and studies demonstrate that HIF-1α is a potential target in tumor angiogenesis (Wu et al. 2003; Unruh et al. 2003). HIF-1α transcriptionally regulates VEGF expression in hypoxic cells and promotes angiogenesis in solid tumors (Kung et al. 2000; Miller et al. 1994; Carmeliet et al. 1998; Sudhakar et al. 2005). These findings suggest that HIF-1α is a prime target for anticancer therapies. Our recently published findings demonstrate that α1(IV)NC1 binds to α1β1 integrin on endothelial cells and inhibits MAPK signaling, resulting in inhibition of HIF-1α expression (Fig. 1). Treatment of wild-type tumor-bearing mice with α1(IV)NC1 decreased circulating VEGFR2-positive endothelial cells; such observations were not made in integrin α1 knockout mice. Measuring the number of circulating endothelial cells is being evaluated as a pharmacodynamic marker (Hurwitz et al. 2004). These studies provide a rationale for the use of α1(IV)NC1 as an inhibitor of HIF-1α and VEGF in hypoxic endothelial cells. This hypoxia-related inhibitory activity might be exploited for antiangiogenic therapy in the treatment of cancer, but more preclinical laboratory studies are needed.

α2(IV)NC1 or canstatin
Proteolytic degradation of Type IV collagen liberates a 24-kDa peptide from the α2 chain, called α2(IV)NC1, which was reported to inhibit tumor-associated angiogenesis (Petitclerc et al. 2000). The exact mechanisms by which this NC1 domain of Type IV collagen inhibits tumor angiogenesis are not completely understood. α2(IV)NC1 binds to the endothelial and tumor cell surface in an αVβ3 and αVβ5 integrin-dependent manner (Panka and Mier, 2003; Roth et al. 2005; Magnon et al. 2005; Magnon et al. 2007). α2(IV)NC1 competes with the Type IV collagen of the ECM for cell surface integrin binding and reverses the proliferative and migratory effects induced by cell-ECM interactions. Thus, αVβ3 and αVβ5 integrins appear to mediate the antiangiogenic and antitumorigenic properties of α2(IV)NC1 (Magnon et al. 2005).
In addition, researchers determined that α2(IV)NC1 binds to αVβ3 and αVβ5 integrins and induces apoptosis in endothelial and certain tumor cells (Magnon et al. 2005). α2(IV)NC1 inhibits the growth of many tumors in human xenograft mouse models, and histological studies revealed decreased CD31-positive vasculature (Petitclerc et al. 2000; Kamphaus et al. 2000; Roth et al. 2005; Magnon et al. 2005; Magnon et al. 2007). α2(IV)NC1 strongly inhibits the migration and proliferation of endothelial cells. These events are mediated by an upstream event involving α2(IV)NC1 binding to αVβ3 and αVβ5 integrins. Recent findings have shown that α2(IV)NC1 inhibits the phosphorylation of Akt, FAK, mammalian target of rapamycin (mTOR), eukaryotic initiation factor 4E binding protein-1 (4E-BP1), and ribosomal S6 kinase in cells (Panka and Mier, 2003). Collectively, the available research suggests that α2(IV)NC1 binds to αVβ3 and αVβ5 integrins and inactivates FAK downstream signaling, leading to suppression of cell proliferation and migration and thus to apoptosis (Panka and Mier, 2003). α2(IV)NC1 binds to αVβ3 and αVβ5 integrins and initiates two apoptotic pathways that involve activation of caspase-8 and -9 (both initiators of the downstream apoptotic process), leading to activation of caspase-3 (Roth et al. 2005; Magnon et al. 2005). α2(IV)NC1 activates caspase-8 by downregulating Flip levels. Upregulation of Fas/Fas ligand triggers cell death not only directly through caspase-3 activation but also indirectly through mitochondrial damage via activation of caspase-9 within the apoptosome. On the other hand, phosphorylated FAK/PI3K is known to inactivate the mitochondrial apoptotic pathway by inhibiting caspase-9 (Magnon et al. 2005). Thus, α2(IV)NC1 directly activates procaspase-9 through inhibition of the FAK/PI3K pathway and amplifies the Fas-dependent pathway in mitochondria. Caspase activation might be exploited for antitumorigenic therapy in the treatment of cancer.

α3(IV)NC1 or tumstatin
A 28-kDa proteolytic peptide liberated from the NC1 domain of the α3 chain of Type IV collagen by MMP-9 and MMP-2 has been shown to inhibit the proliferation of melanoma and other epithelial tumor cell lines in vitro by binding to the CD47/αVβ3 integrin complex (Monboisse et al. 1994; Han et al. 1997; Shahan et al. 1999; Petitclerc et al. 2000; Hamano et al. 2003). In vivo, overexpression of the α3(IV)NC1 domain in tumor cells inhibited their invasive properties in a mouse melanoma model (Pasco et al. 2005). α3(IV)NC1 inhibits the formation of new blood vessels in Matrigel plugs and suppresses tumor growth of human renal cell carcinoma and prostate carcinoma in mouse xenograft models, and this is associated with endothelial cell-specific apoptosis in vivo (Petitclerc et al. 2000; Maeshima et al. 2000). The antiangiogenic activity of α3(IV)NC1 is localized to two distinct integrin-binding regions of the molecule that are separate from the region responsible for the antitumor cell activity (Borza et al. 2006; Boosani et al. 2007). αVβ3 binds the NH2-terminal region (amino acids 54-132) of α3(IV)NC1, which is associated with the antiangiogenic activity, and α3β1 binds the COOH-terminal region (amino acids 185-203), which is associated with the antitumor activity (Shahan et al. 1999; Floquet et al. 2004). These two distinct integrin-binding sites of α3(IV)NC1, mediating two distinct antiangiogenic and antitumorigenic activities, were recently reported by Boosani et al. (Fig. 3) (Boosani et al. 2007).
Fig. 2 (caption, in part): (2) Activation of caspase-8 and -9 leading to activation of caspase-3. α2(IV)NC1 activates pro-caspase-8 and -9 directly through inhibition of the FAK/PI3K/Akt/mTOR pathway, and also indirectly enhances the mitochondrial pathway through Fas-dependent caspase-8 activation, resulting in inhibition of protein synthesis, DNA damage, and cell death.

The signaling mechanism by which α3(IV)NC1 binding to αVβ3 integrin inhibits endothelial cell-specific protein synthesis was reported previously (Maeshima et al. 2002; Sudhakar et al. 2003). This mechanism has since been implicated in the inhibition of tumor growth, via inhibition of tumor angiogenesis, for several tumor cell lines such as CT26 (colon adenocarcinoma), LLC (Lewis lung carcinoma), renal cell carcinoma (786-O), prostate carcinoma (PC3), human prostate cancer (DU145), human lung cancer (H1299), and human fibrosarcoma (HT1080) (Petitclerc et al. 2000; Miyoshi et al. 2006; Borza et al. 2006; Maeshima et al. 2000). Upon interaction with αVβ3 integrin, α3(IV)NC1 inhibits activation of the FAK, PI3K, Akt/protein kinase B, and mTOR pathways and prevents the dissociation of eIF4E from 4E-BP1, leading to inhibition of cap-dependent translation (Maeshima et al. 2002; Sudhakar et al. 2003). Furthermore, these findings indicate a role for integrins in mediating cell-specific inhibition of protein translation, suggesting a potential mechanism for the specific effects of α3(IV)NC1 on endothelial cells. Recently, our laboratory identified a signaling mechanism by which α3(IV)NC1 inhibits hypoxia-induced cyclooxygenase-2 (COX-2) expression in endothelial cells via the FAK/Akt/NFκB pathway, leading to decreased tumor angiogenesis and tumor growth in an α3β1 integrin-dependent manner (Boosani et al. 2007). COX-2 is a key enzyme in the conversion of arachidonic acid to prostaglandins (PGs) and other eicosanoids (Hla and Neilson, 1992). Two isoforms of COX have been identified: COX-1 is expressed constitutively, whereas COX-2 is induced by a variety of factors, including cytokines, growth factors, and tumor promoters (Hla and Neilson, 1992; DuBois et al. 1994). Mitogens such as tumor necrosis factor, phorbol ester, lipopolysaccharide, or interleukin-1 are known to increase the steady-state levels of COX-2 (Jones et al. 1993; Michiels et al. 1993). Hypoxia induces COX-2 expression via nuclear factor-kappa B (NFκB) (Schmedtje et al. 1997; Tamura et al. 2002). There is ample evidence that COX-2 overexpression contributes to carcinogenesis and that COX-2 disruption can both prevent and treat a variety of solid tumors (Wu et al. 2003; Wu et al. 2004; Tamura et al. 2002; Subbaramaiah et al. 1997). NFκB plays an essential role in many diseases, such as AIDS, atherosclerosis, asthma, arthritis, diabetes, inflammatory bowel disease, muscular dystrophy, stroke, viral infections, and cancer, and is a possible target of therapeutic intervention (Kumar et al. 2004; Shishodia and Aggarwal, 2004). NFκB may facilitate the induction of COX-2 by lipopolysaccharide and phorbol ester in concert with the nuclear factor-interleukin-6 expression site and a cAMP-responsive element site in bovine aortic endothelial cells (Inoue et al. 1995; Yamamoto et al. 1995). In endothelial cells, α3(IV)NC1 binds to α3β1 integrin and inhibits NFκB signaling, resulting in inhibition of COX-2-mediated signaling.
Expression of COX-2 was further shown to be inhibited in β3 integrin knockout endothelial cells treated with α3(IV)NC1, indicating that COX-2-mediated signaling is regulated through α3β1 and not through αVβ3 integrin (Boosani et al. 2007). Interestingly, COX-2 expression was not affected when hypoxic α3 integrin knockout ECs were treated with α3(IV)NC1 protein, confirming that COX-2 expression is regulated by α3β1 integrin (Boosani et al. 2007). These findings strongly suggest that α3(IV)NC1 has the ability to inhibit the pro-inflammatory factor COX-2 and to inhibit tumor vasculature and tumor growth in an α3β1 integrin-dependent manner (Boosani et al. 2007). In addition to COX-2 inhibition, expression of the COX-2-regulated downstream proteins VEGF and bFGF was also inhibited upon α3(IV)NC1 treatment of endothelial cells (Boosani et al. 2007). COX-2 has also been reported to play a key role in tumor angiogenesis (Leung et al. 2003; Harris, 2002). Moreover, several investigators have demonstrated that blockade of the COX-2-mediated pathway provides therapeutic benefit in different cancer models (Gately and Kerbel, 2003; Panka and Mier, 2003). COX-2 regulates cellular responses to pathological conditions, and studies have demonstrated that COX-2 is a potential target in tumor angiogenesis (Gately and Kerbel, 2003). The antitumorigenic activity of α3(IV)NC1 under hypoxic conditions in solid tumors was not clearly understood earlier. Our studies shed light on this mechanism by demonstrating that α3(IV)NC1 binds α3β1 integrin, which inhibits COX-2 expression both in vitro and in vivo (Boosani et al. 2007). It is clear that inhibition of hypoxia-induced angiogenesis by α3(IV)NC1 is a complex process requiring further investigation. Our previous findings indicate that there may be several targets for the inhibitory effects of α3(IV)NC1 on tumor angiogenesis, including or in addition to COX-2, VEGF, and bFGF (Boosani et al. 2007). In summary, the in vitro and in vivo observations support the roles of αVβ3 and α3β1 integrins in the antiangiogenic activity of α3(IV)NC1. While both integrins mediate tube formation in cultured ECs, α3β1 integrin mediates signaling events that influence downstream COX-2 expression, which appears to be central to the mechanism of the antitumor activities of α3(IV)NC1. Our studies also demonstrate that α3(IV)NC1 inhibits hypoxia-induced angiogenesis by (1) inhibiting NFκB activation, leading to (2) inhibition of COX-2 expression, which in turn results in (3) downregulation of hypoxia-induced VEGF/bFGF expression (Fig. 3) (Boosani et al. 2007). These findings have potential implications for the use of α3(IV)NC1 in the treatment of solid tumors, whose growth depends critically on hypoxic angiogenesis. The decrease in COX-2 expression under hypoxia, resulting in decreased VEGF/bFGF expression, likely represents a primary molecular mechanism by which α3(IV)NC1 inhibits the pathological angiogenesis that is essential to tumor growth (Boosani et al. 2007).

α6(IV)NC1
In addition to the NC1 domains of the collagen IV α1, α2, and α3 chains, the α6(IV)NC1 domain also possesses antiangiogenic activity and inhibits tumor growth (Petitclerc et al. 2000), but a clear and extensive analysis of this molecule remains to be carried out.
Conclusions and Future Directions
Type IV collagen-derived endogenous angiogenesis inhibitors bind to different cell surface integrins and exert their effects through multiple mechanisms, including induction of endothelial cell apoptosis; inhibition of endothelial cell migration, proliferation, and tube formation; and inhibition or alteration of the functions of proangiogenic growth factors. Three broad conclusions can be drawn from the signaling mechanisms of the Type IV collagen-derived angiogenesis inhibitors shown in Table 1. (1) All of these Type IV collagen-derived inhibitors appear to exert their antiangiogenic effects by binding to specific cell surface integrins. (2) These inhibitors also block the binding of natural ligands/binding partners to proangiogenic receptors/molecules. (3) In addition, possibly by binding to their receptors, these inhibitors crosstalk with other cell surface receptors and activate specific caspase-mediated signaling to regulate cell function (Panka and Mier, 2003; Magnon et al. 2005). Currently, more than 25 different endogenous circulating molecules (small proteins or peptides) that function as angiogenesis inhibitors are known to exist in the human body. The circulating physiological concentration of α3(IV)NC1 in normal mice was reported to be about 336 ng/ml, and it was absent in mice null for the α3 chain of Type IV collagen (Hamano et al. 2003). Administration of 300 ng of recombinant α3(IV)NC1, restoring physiological levels in Type IV collagen α3-null mice bearing LLC tumors, decreased tumor growth, the number of blood vessels, and circulating endothelial cells to wild-type baseline levels (Hamano et al. 2003; Sund et al. 2005). It is quite possible that genetic control of the physiological levels of these endogenous angiogenesis inhibitors contributes to a critical line of defense against the conversion of dormant neoplastic events into a malignant cancer phenotype. Several angiogenesis inhibitors, including the integrin αV antagonist EMD 121974, 2-methoxyestradiol (Panzem), and the MMP-2 and -9 inhibitor COL-3, are currently in phase 1/2 human clinical trials (Jansen et al. 2004). Questions regarding resistance to these angiogenesis inhibitors remain unanswered; however, a combination of radiation therapy with antiangiogenic therapies may also prove clinically useful and effective. Further evaluation through extensive laboratory studies is needed before Type IV collagen-derived endogenous inhibitors of angiogenesis can be considered for clinical trials. Earlier lessons from preclinical trials of angiostatin, endostatin, thrombospondin-1 (ABT-510), and 2-ME suggest that more basic laboratory research is required to better understand the mechanisms of action of each of these endogenous angiogenesis inhibitor molecules. Presently, anti-angiogenic agents such as bevacizumab and several VEGFR tyrosine kinase inhibitors, including vatalanib (PTK787/ZK 222584), semaxanib (SU5416), sunitinib (SU11248), and sorafenib (BAY 43-9006), are in clinical trials (Hurwitz et al. 2004; Morabito et al. 2006). In the past few years, several advances have been made in functional studies of VBM-derived endogenous angiogenesis inhibitors. The VBM is not only an important structural component of blood capillaries but also an important functional regulator of tumor angiogenesis and tumor growth. In its assembled form, the VBM performs a completely different role from its degraded form (after exposure to different proteases).
The degraded VBM modulates cellular behavior, hiding or exposing basement membrane integrin-binding sequences. The VBM has therefore become a rich source of peptides and proteins that possess distinct activities within the same primary sequence. These sequences become available at different stages of VBM structural change, much like the proteins of the coagulation pathway. Our understanding of how these Type IV collagen-derived angiogenesis inhibitors regulate angiogenesis has only just begun, compared with the type XVIII collagen-derived angiogenesis inhibitor endostatin. Further extensive laboratory studies are required to establish how Type IV collagen-derived molecules regulate cellular functions to halt tumor growth and tumor angiogenesis.
2017-09-02T08:04:10.561Z
2007-10-14T00:00:00.000
{ "year": 2007, "sha1": "f0923f485f51d11d2092dd4fc85cfa4ee2128d77", "oa_license": "CCBY", "oa_url": "http://journals.sagepub.com/doi/pdf/10.4137/GRSB.S345", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "f0923f485f51d11d2092dd4fc85cfa4ee2128d77", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
237442322
pes2o/s2orc
v3-fos-license
Efficacy and safety of tolperisone versus baclofen among Chinese patients with spasticity associated with spinal cord injury: a non-randomized retrospective study

There are many medications available to treat spasticity, but the tolerability of these medications is the main issue in choosing the best treatment. The objectives of this study were to compare the efficacy and adverse effects of tolperisone with those of baclofen among patients with spasticity associated with spinal cord injury. Patients received baclofen plus physical therapy (BAF+PT, n=135), tolperisone plus physical therapy (TOL+PT, n=116), or physical therapy alone (PT, n=180). The modified Ashworth scale score, the modified Medical Research Council score, the Barthel Index score, and the Disability Assessment scale score were improved (P<0.05 for all) in all patients at the end of 6 weeks compared with pre-intervention values. After 6 weeks, the overall coefficients of efficacy of the intervention(s) in the BAF+PT, TOL+PT, and PT groups were 1.15, 0.45, and 0.05, respectively. Patients in the BAF+PT group reported asthenia, drowsiness, and sleepiness, and those in the TOL+PT group reported dyspepsia and epigastric pain as adverse effects. Compared with physical therapy alone, both baclofen plus physical therapy and tolperisone plus physical therapy played a significant role in improving the daily activities of patients. Nonetheless, baclofen plus physical therapy was tentatively effective, whereas tolperisone plus physical therapy was only slightly effective. In addition, baclofen caused adverse effects related to sedative manifestations (Level of Evidence: III; Technical Efficacy Stage: 4).

Introduction
Motor neuron dysfunction due to defects in inhibitory descending motor pathways of the spinal cord leads to spasticity (1,2). Hyperexcitability of the stretch reflex, exaggerated tendon jerks, and a velocity-dependent increase in tonic stretch reflexes are clinical signs of spasticity (1). Another characteristic is an increase in muscle tone (3). Treatment of spasticity is based on the rehabilitation of patients to improve daily activities (2). Muscle relaxants act through polysynaptic reflex mechanisms and are therefore well suited to the treatment of spasticity associated with spinal cord injury (4). Baclofen, a γ-aminobutyric acid B (GABA-B) receptor agonist and central muscle relaxant, is approved by the United States Food and Drug Administration (USFDA) for the treatment of spasticity associated with spinal cord injury (5). It is effective within a week of intervention (1), but it is an addictive drug and causes sedation, dizziness, drowsiness, and other adverse effects during treatment (2). Tolperisone is also a centrally acting muscle relaxant (a sodium and calcium channel blocker acting at the brainstem) and causes neither sedation nor withdrawal symptoms (1). A prospective study in an Indian population (1) showed the superiority of tolperisone over baclofen among patients with spasticity associated with spinal cord injury, cerebral palsy, or stroke, but it had a small sample size. For the pharmacological management of spasticity, baclofen, dantrolene, tizanidine, and diazepam are commonly used initial medications (6,7). Baclofen is generally considered the first-line treatment in spinal cord injury and can be very effective despite its side effects (3). On the other hand, tolperisone is mainly prescribed for acute muscle spasms and is not approved by the USFDA for the treatment of spasticity.
The efficacy and safety of these two drugs have not been compared in depth. The objectives of this non-randomized retrospective analysis were to compare muscle tone, muscle strength, functional outcomes, disability assessment, and treatment-emergent adverse effects of tolperisone plus physical therapy with those of baclofen plus physical therapy and of a non-drug intervention among Chinese patients with spasticity associated with spinal cord injury.

Material and Methods

Ethical consideration and consent to participate
The designed protocol (GPH/CL/04/20, dated 5 August 2020) was approved by the Ganzhou People's Hospital review board. The study adhered to the law of China and the 2008 Declaration of Helsinki. An informed consent form regarding the intervention(s) and the publication of anonymized patient information in the form of an article was signed before treatment by patients and/or their relatives (the legally authorized person).

Inclusion and exclusion criteria
Patients aged 18 years and above, experiencing spasticity of the hip adductor muscles, medial hamstring muscle, or the lower limbs associated with spinal cord injuries (6 months of history), and requiring rehabilitation to perform daily activities (modified Ashworth scale 2 or less, modified Medical Research Council score 2 or less, and Barthel Index functional outcome score 50 or less before treatment) were included in the analysis. Patients aged below 18 years, with an orthopedic fracture or a concomitant neurological disease before treatment, who were pregnant, or who had a loss of locomotion from causes other than spasticity were excluded from the analysis.

Sample size calculation
The study was based on the assumption that 80±5% of patients would reach a modified Ashworth score of more than 2, with a 15% drop-out over the intervention period. The sample size was calculated from this assumption about muscle tone, a 5% two-sided type-I error (α=0.05), and 80% power (β=0.2) at a 95% level of confidence; a worked sketch of this type of calculation is shown after this subsection. The minimum number of patients required in each group was 115 (8).

Patient groups and therapy
A total of 135 patients with spasticity who had been using dextromethorphan for cough, viral infections, and/or myasthenia gravis received 5 mg baclofen (BAF; Actavis-UK, Ltd., UK) three times a day. The dose was increased by 5 mg/week, with titration up to 80 mg/day (2). These patients also received physical therapy and were included in the BAF+PT group. A total of 116 patients with spasticity who had lactose intolerance and/or stomach ulcer, problems with the lungs or bladder, and/or diabetes mellitus received 150 mg/day of tolperisone (TOL; Myolax, Incepta Pharmaceuticals Ltd., Bangladesh) in three divided doses, with titration up to 600 mg/day (1). These patients also received physical therapy and were included in the TOL+PT group. A total of 180 patients with spasticity who had impaired kidney, hepatic, or heart function, were on antidepressant or Alzheimer's disease therapy, were scheduled for surgery under anesthesia, or had a history of skin allergies, porphyria (an inherited condition causing skin blisters, abdominal pain, and agitation), and/or epilepsy (conditions susceptible to baclofen and tolperisone) received physical therapy only (9,10). These patients were included in the PT group. The baclofen and tolperisone doses were adjusted at 2, 4, and 6 weeks.
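The sketch below reproduces the general form of such a sample-size estimate with statsmodels. Because the paper states only the 80±5% response assumption, alpha, power, and dropout rate, the comparator proportion used here (0.68) is an illustrative assumption, not the authors' actual parameter; with these inputs the result lands close to, but not exactly at, the 115 per group reported.

```python
# Minimal sketch of a two-group sample-size calculation at alpha=0.05
# (two-sided) and 80% power, inflated for 15% dropout. The comparator
# proportion (0.68) is an illustrative assumption; the paper specifies
# only the 80+/-5% response assumption, alpha, power, and dropout rate.
import math
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = proportion_effectsize(0.80, 0.68)  # Cohen's h for 80% vs 68%
n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
n_with_dropout = math.ceil(n_per_group / (1 - 0.15))
print(f"n per group = {math.ceil(n_per_group)}, "
      f"inflated for 15% dropout = {n_with_dropout}")
```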
Physical therapy
Physical therapy included 1 h/day of locomotor training, i.e., body weight-supported treadmill training, walking practice on the ground or on a treadmill, stepping practice, and walking practice in and out of exercise stations (11). Intensive task-specific training, for example walking, sit-to-stand transfers, and standing, was also included (12). Physical therapy was delivered by physiotherapists with a minimum of 3 years of experience at the institutes. Physiotherapists were blinded to group assignment. Physiotherapy differed according to the lesion level (cervical, dorsal, or lumbar).

Outcome measures
Outcome measures were evaluated at the end of 2, 4, and 6 weeks of treatment and/or non-drug intervention by physiotherapists with a minimum of 3 years of experience at the institutes. The tone of a spastic muscle was evaluated using the modified Ashworth scale and the strength of a spastic muscle was evaluated using the modified Medical Research Council score (2), as shown in Table 1.

Table 1. Modified Ashworth scale and modified Medical Research Council score.
Modified Ashworth scale: …, significant improvement but affected part is not moved easily; 3, significant improvement and affected part is moved easily; 4, muscle tone is increased but passive movement is difficult; 5, muscle tone is increased and passive movement is reported.
Modified Medical Research Council score: 0, no contraction; 1, small contraction; 2, movement with gravity; 3, movement reported against gravity but not against resistance; 4, movement reported against gravity and slight resistance; 5, movement reported against gravity and strong resistance; 6, normal power.

Functional outcomes were evaluated using the Barthel Index score for 10 activities: use of the toilet, bladder continence, bowel continence, ambulation, feeding, bathing, dressing, grooming, stair climbing, and transfers. Each activity is scored from 0 to 10, where 0 indicates dependency and 10 indicates independence (the activity is performed without human help). The total score is 100 (13). The coefficient of efficacy after 6 weeks of intervention for each outcome measure was evaluated as per Equations 1-4:

Coefficient of efficacy, muscle tone = (number of patients with modified Ashworth scale 4 or 5 after 6 weeks of intervention) / (number of patients with modified Ashworth scale 0, 1, or 2 after 6 weeks of intervention)   (Equation 1)

Coefficient of efficacy, muscle strength = (number of patients with modified Medical Research Council score 5 or 6 after 6 weeks of intervention) / (number of patients with modified Medical Research Council score 0, 1, or 2 after 6 weeks of intervention)   (Equation 2)

Coefficient of efficacy, functional outcomes = (number of patients with Barthel Index score ≥75 after 6 weeks of intervention) / (number of patients with Barthel Index score <50 after 6 weeks of intervention)   (Equation 3)

Coefficient of efficacy, Disability Assessment = (number of patients with Disability Assessment scale score 0 or 1 after 6 weeks of intervention) / (number of patients with Disability Assessment scale score 3 or 4 after 6 weeks of intervention)   (Equation 4)

The overall coefficient of efficacy after 6 weeks of intervention was the sum of the coefficients of efficacy of the individual outcome measures divided by the number of outcomes evaluated (n=4), as per Equation 5:

Overall coefficient of efficacy = (1/n) × Σ (coefficient of efficacy)   (Equation 5)

If the overall coefficient of efficacy was ≥3, treatment was considered highly effective; 2-2.99, sufficiently effective; 1-1.99, tentatively effective; 0.40-0.99, slightly effective; and <0.40, ineffective (1).

Data regarding treatment-emergent adverse effects during the 6 weeks of treatment and/or non-drug interventions were retrospectively collected from the patients' records at the institutes.

Statistical analysis
InStat 3.01 (GraphPad, USA) was used for statistical analysis. One-way analysis of variance (ANOVA) followed by the Tukey test (critical value q>3.322 considered significant) for continuous and ordinal variables between groups, and repeated-measures ANOVA followed by the Tukey test (critical value q>3.646 considered significant) for continuous and ordinal variables within groups, were performed (2). The Fisher exact test (for two columns and two rows) or the chi-squared test of independent samples (more than two columns and two rows) was performed for categorical data. Results were considered significant if P<0.05.
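As an illustration of Equations 1-5 above, the following sketch computes the overall coefficient of efficacy from per-outcome patient counts; the counts are invented placeholders, not the study's data.

```python
# Minimal sketch of Equations 1-5: each outcome contributes the ratio of
# patients in the "good" score band to patients in the "poor" score band
# after 6 weeks; the overall coefficient is the mean across outcomes.
# The counts below are invented placeholders, not the study's data.

def coefficient_of_efficacy(n_good: int, n_poor: int) -> float:
    """Equations 1-4: good-band count divided by poor-band count."""
    return n_good / n_poor

# (good, poor) counts for muscle tone, muscle strength,
# Barthel Index, and Disability Assessment scale
outcomes = [(23, 20), (18, 16), (25, 22), (15, 13)]

coefficients = [coefficient_of_efficacy(g, p) for g, p in outcomes]
overall = sum(coefficients) / len(coefficients)  # Equation 5

# Interpretation bands used in the paper
if overall >= 3:
    verdict = "highly effective"
elif overall >= 2:
    verdict = "sufficiently effective"
elif overall >= 1:
    verdict = "tentatively effective"
elif overall >= 0.40:
    verdict = "slightly effective"
else:
    verdict = "ineffective"

print(f"overall coefficient = {overall:.2f} ({verdict})")
```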
Results

Study population
From January 15, 2018 to July 1, 2020, a total of 452 patients with spasticity were seen at the Department of Spine Surgery of the Ganzhou People's Hospital, China, and at the Ganzhou Hospital of Traditional Chinese Medicine, China. Among them, 12 had an orthopedic fracture and 9 had neurological disease(s); the data of these patients (n=21) were therefore excluded from the analysis. Data on treatment efficacy and scale scores of 431 patients were retrospectively collected after obtaining written approval from the institutions. The flow diagram of spasticity management is shown in Figure 1.

Demographic and spasticity characteristics
At baseline, there were no significant differences among the groups in demographic and spasticity characteristics (P>0.05 for all parameters; Table 2). Patients who had other major disorders, such as myasthenia gravis and diabetes mellitus, were taking other medicines together with the study interventions.

Outcome measures
Muscle tone. At baseline, there were no significant differences in the modified Ashworth scale score among groups (Supplementary Table S1).

Muscle strength. At baseline, there was no significant difference in the modified Medical Research Council score among groups (P=0.182; Figure 3A). Scores at 2 (Figure 3B), 4 (Figure 3C), and 6 (Figure 3D) weeks are detailed in Supplementary Table S2.

Functional outcomes. At baseline, there was no significant difference in the Barthel Index score among groups (Figure 4A; P=0.606). At 2 weeks, patients of the BAF+PT group had improved scores compared with the TOL+PT and PT groups (Figure 4B). At 4 weeks, patients of the BAF+PT and TOL+PT groups had improved scores compared with the PT group (Figure 4C). At 6 weeks, patients of the BAF+PT group had improved scores compared with the TOL+PT and PT groups (Figure 4D).
Within one week, only the BAF+PT group (P<0.0001, q=6.138) had a significant improvement in the Barthel Index score. The details of the Barthel Index score analyses are reported in Supplementary Table S3.

Disability Assessment scale score. At baseline and 2 weeks, there was no significant difference in the Disability Assessment scale score among groups. At 4 (Figure 5A) and 6 weeks (Figure 5B), patients of the BAF+PT and TOL+PT groups had improved scores compared with the PT group (Supplementary Table S4).

Coefficient of efficacy
At 6 weeks after the start of the interventions, BAF+PT was tentatively effective, TOL+PT was slightly effective, and PT alone was ineffective (Table 3).

Treatment-emergent adverse effects
During the 6 weeks of interventions, the BAF+PT group experienced asthenia, drowsiness, hypoesthesia, paresthesia, sweating, sciatica, vertigo, sleepiness, nausea, amenorrhea, and anorexia as adverse effects. Patients of the TOL+PT group experienced dyspepsia and epigastric pain (Table 4).

Discussion
The study found that baclofen plus physical therapy and tolperisone plus physical therapy successfully improved the scale scores at the end of the 6-week interventions compared with physical therapy alone. The results are in agreement with those of a trial performed in an Indian population (1). The Indian trial (1) included children and adults, whereas the current study included adults (age >18 years) only. Also, the Indian trial (1) administered up to 80 mg/day baclofen to children, while the maximum daily dose should not exceed 60 mg/day for children >8 years of age (9). Baclofen is a γ-aminobutyric acid B agonist and is successful in restoring the movement and strength of paralyzed muscles because it acts on the central nervous system (15). The membrane-stabilizing action of tolperisone improved outcome measures of patients suffering from spasticity (16). The physical therapy program described in the current study was incomplete, as it should have included stretching exercises or active range-of-motion exercises, as well as treatment modalities such as heat/cold therapy, transcutaneous electrical nerve stimulation, electrical stimulation, or functional electrical stimulation (17), which are other techniques that can be used for the non-pharmacological management of spasticity (1-3). Physical therapy provides sensory input from the periphery and motor input from the sensorimotor cortex onto the damaged spinal cord, which is not sufficient for structural re-organization in spasticity (12). Other physical therapies that could be useful in spastic patients, for instance high-intensity noisy mechanical stimulation to reduce the monosynaptic reflex (18) and Chinese therapies including acupuncture or electroacupuncture (19), should be considered to improve motor function in patients with spasticity due to spinal cord injuries. The study found that treatment was tentatively effective in the BAF+PT group and slightly effective in the TOL+PT group. The results for the coefficient of efficacy in the current study did not agree with those of previous trials (1,20). The reason for the contradictory results is that one trial (1) was performed with a small sample size and the other (20) had fewer outcome measures, which can lead to type-I error. The results on adverse effects in the current study agreed with those of previous trials (1,2,15).
Tolperisone acts directly on the spinal cord and inhibits neurotransmitter release by blocking the presynaptic influx of calcium ions that drives neurotransmitter efflux. Because baclofen has a γ-aminobutyric acid B agonist action on the brain (1), it has more treatment-related adverse effects than tolperisone. Tolperisone has no adverse effects related to sedative manifestations. Oral baclofen has poor acceptance by patients because of its adverse effects (15). Concerning treatment-emergent adverse effects, tolperisone plus physical therapy is the treatment of choice for the management of spasticity due to spinal cord injuries. The treatments appeared comparable in efficacy, with an adverse event profile that favored tolperisone. Tolperisone is recommended for patients who are susceptible to sedative adverse effects.

There are several limitations of the study that have to be reported. For example, the combined effects of baclofen plus tolperisone plus physical therapy were not evaluated. The follow-up time was only 6 weeks. The study was a non-randomized retrospective analysis. There was no information available about critical points such as the level of the lesion, the timing of the lesion, and the type of treatment received before the study started. A prospective, well-designed study is recommended. An injury in the cervical spine is very different from an injury in the dorsal or lumbar spine and, of course, the treatment and the physiological responses to medication are different. The type of injuries (e.g., traumatic or non-traumatic) was not discussed. Retrospective studies are observational, non-randomized studies that are subject to selection bias with no control over confounding variables, which can cause an overestimation or underestimation of the association between specific interventions and treatment effects. The study did not report the number of patients who discontinued treatment due to a lack of therapeutic effect or medication-related side effects; these variables are highly relevant in studies evaluating the efficacy and safety of drug treatments or interventions. The reported side effects are not systematically recorded in clinical practice (for baclofen/tolperisone and physiotherapy); therefore, the reported prevalence of side effects and the derived conclusions about the safety of the compared treatments lack validity. The study only included paraplegic patients.

Conclusions

Baclofen plus physical therapy as well as tolperisone plus physical therapy had a significant role in the improvement of the daily activities of patients with spasticity due to spinal cord injuries. However, baclofen plus physical therapy was tentatively effective and tolperisone plus physical therapy was slightly effective. Baclofen had important adverse effects related to sedative manifestations.
Nasopharyngeal metatranscriptomics reveals host-pathogen signatures of pediatric sinusitis Acute sinusitis (AS) is the fifth leading cause of antibiotic prescriptions in children. Distinguishing bacterial AS from common viral upper respiratory infections in children is crucial to prevent unnecessary antibiotic use but is challenging with current diagnostic methods. Despite its speed and cost, untargeted RNA sequencing of clinical samples from children with suspected AS has the potential to overcome several limitations of other methods. However, the utility of sequencing-based approaches in analysis of AS has not been fully explored. Here, we performed RNA-seq of nasopharyngeal samples from 221 children with clinically diagnosed AS to characterize their pathogen and host-response profiles. Results from RNA-seq were compared with those obtained using culture for three common bacterial pathogens and qRT-PCR for 12 respiratory viruses. Metatranscriptomic pathogen detection showed high concordance with culture or qRT-PCR, showing 87%/81% sensitivity (sens) / specificity (spec) for detecting bacteria, and 86%/92% (sens/spec) for viruses, respectively. We also detected an additional 22 pathogens not tested for in the clinical panel, and identified plausible pathogens in 11/19 (58%) of cases where no organism was detected by culture or qRT-PCR. We assembled genomes of 205 viruses across the samples including novel strains of coronaviruses, respiratory syncytial virus (RSV), and enterovirus D68. By analyzing host gene expression, we identified host-response signatures that distinguished bacterial and viral infections and correlated with pathogen abundance. Ultimately, our study demonstrates the potential of untargeted metatranscriptomics for in depth analysis of the etiology of AS, comprehensive host-response profiling, and using these together to work towards optimized patient care. 
INTRODUCTION

Acute sinusitis is a bacterial superinfection that usually occurs in children with inflamed mucosa secondary to an upper respiratory tract viral infection (URTI) (1,2). It is one of the most common diagnoses in pediatric primary care settings in the U.S., with 5 million antibiotic prescriptions written annually (3). However, because symptoms of acute sinusitis and an uncomplicated URTI overlap considerably, some children diagnosed and treated for acute sinusitis do not have a bacterial infection (2,3). The diagnosis is especially challenging because the symptoms may be less specific in young children (2). Overtreatment of infections such as sinusitis is thought to be a major contributor to the rise in antimicrobial resistance (AMR), which remains an ongoing threat to public health (1).

Recently, it has been suggested that one way to distinguish between bacterial and viral infections would be to obtain samples from the middle turbinate or nasopharynx of children with suspected sinusitis and to test these samples (using culture or qRT-PCR) for the three bacterial pathogens that frequently cause acute sinusitis (6). However, distinguishing bacterial sinusitis from an uncomplicated viral URTI using currently available microbiological tests is challenging for several reasons. First, pathogenic particles from asymptomatic or past infections may lead to false-positive qRT-PCR detections that have little relevance to the presenting symptoms (7). Second, because many pathogens frequently colonize the nasal passages of children even during health, detecting their presence is often not indicative of the occurrence of a bacterial infection (8,9).
With the remarkable reduction in the cost of high-throughput sequencing technologies, sequencing has emerged as an appealing strategy for the detection and taxonomic characterization of microorganisms in clinical samples from patients, and has the potential to overcome several limitations of currently available methods such as culture or qRT-PCR (10,11). High-throughput sequencing of total RNA from a patient sample (metatranscriptomics) allows for a broad, untargeted approach to detect common, uncommon, and novel pathogens. Pathogen detection by high-throughput RNA or DNA sequencing is showing promise in a growing number of infectious disease applications, including pneumonia (12,13), COVID-19 (14), meningitis (15), and febrile illness (16), and has been effective in identifying potential pathogens causing infection, including cases where no pathogen was detected using qRT-PCR or culture.

In addition, a significant benefit of metatranscriptomic sequencing is that it captures both pathogen-derived and host-derived RNA, which facilitates both pathogen detection and analysis of host gene expression patterns (host-response profiling). Whereas sequence-based pathogen detection relies on detecting sequences of known pathogens, host-response profiling may quantify the expression level of biomarkers that indicate an active host immune response to infection in a pathogen-agnostic manner. Thus, information on host response may help distinguish active infections from colonization. Several previous studies have used RNA-seq or microarray techniques to identify and quantify biomarkers that differentiate between viral and bacterial respiratory infections (17-22). Using 104 host-response genes identified through microarray analysis of blood samples, Tsalik et al. developed separate bacterial and viral infection classifiers that had a combined accuracy of 87% (17). Host-response profiling from blood samples has also formed the basis of commercially available systems (e.g., MeMed BV®). If host-response profiles from a nasopharyngeal (NP) sample can similarly be used to differentiate bacterial from non-bacterial sinusitis infections, this could contribute to the development of biomarker assays that inform clinical decision making regarding the use of antibiotics.

In this work, to examine the ability of metatranscriptomics to uncover microbiological and clinically relevant information, we performed metatranscriptomic analysis of NP swabs from 221 children with clinically diagnosed acute sinusitis who were a subset of children enrolled in a previously described clinical trial (6). Through RNA-seq analysis of NP swab samples, we performed metatranscriptomic pathogen detection and assessed its ability to reproduce culture and qRT-PCR results for 3 bacteria and 12 viruses. We then assembled partial to complete genomes of 205 viruses. Finally, we performed host-response profiling and identified gene expression signatures of bacterial and viral infection, which correlated significantly with pathogen load. Our work shows the potential of metatranscriptomics for improving the diagnosis of sinusitis and upper respiratory tract infections.

Cohort characteristics: A subset of 221 pediatric patients presenting with symptoms of acute sinusitis from a previous study (6) (Feb 2016 to Mar 2022) were selected for NP RNA-seq (Fig. 1, Table 1). Further details are provided in the Methods and in Shaikh et al.
(6). One naris was sampled using an NP swab, and this was used for viral qRT-PCR, bacterial culture, and RNA sequencing (23); 171 (77%) and 169 (76%) of the children tested positive for at least one bacterium or virus, respectively. Parents assessed symptom severity daily during the 10 days following diagnosis. Haemophilus influenzae isolates were tested for beta-lactamase production (N=69). In parallel, RNA extraction from NP swabs and sequencing were performed to conduct metatranscriptomic analysis using a bioinformatics approach. Using the sequencing data, several analyses were performed: pathogen detection and quantification, assembly of detected respiratory viruses, detection of beta-lactamase genes, and transcriptomic analysis of host responses.

Bacterial pathogen detection by metatranscriptomic analysis of NP samples: To identify potential bacterial and viral pathogens in the 221 samples, we performed high-throughput sequencing of total RNA derived from NP swabs. First, we aimed to quantify the abundance of three bacterial pathogens of interest, S. pneumoniae (SPN), M. catarrhalis (MCAT), and H. influenzae (HFLU), as these pathogens are commonly isolated in children with bacterial sinusitis (4). We note that our use of the term "pathogen" does not imply that these organisms are necessarily the causative agents of sinusitis infections. After quality filtering, we performed taxonomic classification of the sequencing reads using Kraken 2 (24). The relative abundance of the three bacterial pathogens (shown in Fig. 2A) was calculated based on the normalized abundance of reads (reads per million, RPM) that mapped to each species. One or more of these three bacterial pathogens were detected in a total of 177 patients (80%). Two or more bacterial pathogens were detected in 89 (40%) patients, and 25 (11%) of patients had all three bacterial pathogens detected. On an individual basis, SPN was detected in 73 (33%), MCAT in 137 (62%), and HFLU in 81 (37%) of patient samples. Tables S1 and S2 contain the clinical culture and RNA-seq based results for bacterial detection for each patient.

Next, we examined the extent to which the calculated abundance of these bacterial pathogens from RNA-seq agreed with their presence/absence based on culture. For all three pathogens, we detected a significant increase in RNA-seq abundance in those with a positive culture, demonstrating concordance between the metatranscriptomic data and culture (Fig. 2B). Some pathogen-negative samples based on culture had an RNA-seq pathogen abundance greater than or equal to the mean abundance seen in positive samples. We then assessed the ability of the RNA-seq data to predict the culture-based test results for each pathogen, and generated receiver operating characteristic (ROC) curves by varying the detection threshold (Fig. 2C). HFLU infections could be detected with the highest accuracy by RNA-seq, with an area under the ROC curve (AUC) of 0.95, SPN infections with an AUC of 0.89, and MCAT infections with an AUC of 0.82. Using a threshold of 3 reads per million, HFLU was detected with a sens/spec of 94%/90%, SPN with 81%/89%, and MCAT with 85%/64% (Table 2).
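To make the detection rule concrete: with per-sample RPM values and culture calls in hand, the sensitivity and specificity at the 3-RPM threshold can be computed as in the Python sketch below. The arrays are hypothetical placeholders, not study data.

import numpy as np

def rpm(read_count, total_reads):
    # Reads per million: the normalized abundance used for pathogen detection
    return read_count / total_reads * 1e6

def sens_spec(abundances, culture_positive, threshold=3.0):
    """Sensitivity/specificity of an RPM threshold against culture results.
    `abundances` and `culture_positive` are parallel per-sample arrays."""
    called = np.asarray(abundances) >= threshold
    truth = np.asarray(culture_positive, dtype=bool)
    tp = np.sum(called & truth); fn = np.sum(~called & truth)
    tn = np.sum(~called & ~truth); fp = np.sum(called & ~truth)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical HFLU abundances (RPM) and culture calls for 8 samples
hflu_rpm = [0.1, 12.4, 0.0, 5.2, 2.9, 40.1, 0.4, 7.7]
hflu_culture = [0, 1, 0, 1, 0, 1, 0, 1]
sens, spec = sens_spec(hflu_rpm, hflu_culture, threshold=3.0)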
Beta-lactamase gene detection in HFLU-positive samples: We next examined whether metatranscriptomics could identify potential resistance genes associated with HFLU. Culture-based tests for beta-lactamase were performed for all HFLU-positive samples, and these were used as the reference standard to analyze the accuracy of RNA-seq based detection. We assembled all non-human reads from samples that were clinically positive for HFLU (N=69) and used the Comprehensive Antibiotic Resistance Database (CARD) (25) to detect beta-lactamase genes with at least 10% coverage (fig. S1). Beta-lactamase genes were detected in 74% (20/27) of the samples associated with resistant HFLU, and in 33% (13/42) of the samples associated with non-resistant HFLU, which reflects a significant (2.1-fold) increase in detected beta-lactamase genes in the resistant samples (p = 0.002, Fisher exact test). The imperfect concordance between RNA-seq based and culture-based beta-lactamase detection reflects the known challenges in detecting AMR genes using metagenomic approaches (26). The complete list of genes and the portion of the reference genome detected for each hit can be found in tables S3-S5.

Next, we examined the extent to which the RNA-seq based predictions matched viral presence/absence based on qRT-PCR. As shown visually in Fig. 3A, the relative abundance of viruses detected by metatranscriptomics was in strong agreement with the results of qRT-PCR-based tests, with lower qRT-PCR cycle threshold (Ct) values corresponding to higher RPM values in RNA-seq. A significant correlation (r = 0.75, p = 1.3 x 10^-46) was detected between 1/Ct values and viral load calculated as log10(reads per kilobase million, RPKM) (27) (Fig. 3B). Samples containing viruses detected by qRT-PCR but not by RNA-seq had significantly higher cycle thresholds (mean = 34.7) compared to true positives (mean = 23.2; t-test p-value = 5.5 x 10^-5), which has been reported in previous RNA-seq studies (28). For all viruses except INFC (which only had 8 positive samples), we detected an increase in metatranscriptomic abundance in those with a positive qRT-PCR result (Fig. 3C).

We then calculated the accuracy of viral detection by using the results of the qRT-PCR tests as the ground truth. Due to the uniqueness of viral sequences, we found that a very low threshold (>=1 RPM) was sufficient to distinguish virus-positive from negative samples. Using this threshold, we calculated the sensitivity and specificity of metatranscriptomic pathogen detection for each of the 12 viruses, as shown in Table 2. Nine out of the 12 viruses were detected with 90-100% sensitivity and specificity, while INFC, HRV, and ADV were detected with lower accuracy. Overall, we were able to detect the 12 viruses with an average sensitivity/specificity of 86%/92%. These accuracies are consistent with other studies performing sequencing-based pathogen detection using NP samples (27,28).
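The viral-load comparison above can be reproduced in outline as follows; the RPKM normalization matches the formula given in the Methods (virus reads divided by total reads and genome size in kilobases, times one million), while the paired Ct/read-count values below are hypothetical.

import numpy as np
from scipy.stats import pearsonr

def rpkm(virus_reads, total_reads, genome_len_bp):
    # Reads per kilobase of reference sequence per million total reads
    return virus_reads / (total_reads * (genome_len_bp / 1000.0)) * 1e6

# Hypothetical paired measurements: qRT-PCR Ct and RNA-seq read counts
ct = np.array([18.0, 22.5, 25.0, 30.2, 33.1, 35.0])
reads = np.array([250000, 40000, 9000, 800, 120, 30])
total = np.full(6, 40e6)   # ~40 M reads per sample (the sequencing target)
genome = 15_000            # e.g., a ~15 kb RNA virus genome

viral_load = np.log10(rpkm(reads, total, genome))
r, p = pearsonr(1.0 / ct, viral_load)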
RNA-seq uncovers additional pathogens and alternate explanations of disease etiology: By sequencing total RNA within a sample, metatranscriptomics has the potential to detect additional pathogens beyond those tested by culture or qRT-PCR. We therefore screened our RNA-seq dataset for additional pathogens previously associated with URTIs and/or sinusitis infections, as well as non-URTI pathogens and opportunistic pathogens, and further validated the identified species using additional bioinformatic approaches (see Methods). Across the 221 patient samples, we detected 22 additional pathogens that were not tested for clinically, including 11 bacteria and 11 viruses (Fig. 4). These species were then ranked in terms of their maximum relative abundance within a sample (Fig. 4).

Newly identified bacterial pathogens include the fourteen species listed in Fig. 4. The most notable identifications include Mycoplasma pneumoniae and Chlamydia pneumoniae, which were not included in the clinical panel but have been previously implicated in pediatric sinusitis and URTIs (29,30). In addition, opportunistic pathogens including Fusobacterium nucleatum, Moraxella spp., and others were also detected (Fig. 4), though some of these likely have a commensal role in the nasopharynx. Interestingly, we also detected the periodontitis-associated bacteria Treponema medium, Prevotella intermedia, and Tannerella forsythia (31) in a few (N = 1 to 4) samples, with all three co-occurring in the same patient. Follow-up investigation of this patient revealed that they were admitted to an emergency room with a severe tooth infection one year after the NP swab sample was taken, highlighting the potential of NP RNA-seq to detect subclinical infection.

Newly identified viral pathogens with the highest abundance include four human coronaviruses known to cause upper respiratory infections (NL63, OC43, HKU1, and 229E). We also detected parechovirus A and cardiovirus B (Saffold virus), which have been associated with respiratory illness in children (32,33), as well as other viruses that are not typically associated with respiratory infections, including mamastrovirus 9, enteroviruses A and B, human gammaherpesvirus 5, human betaherpesvirus 5, and sequences related to murine leukemia virus (Fig. 4).

Of the 19 samples that had no pathogen detected by culture or qRT-PCR, 11 contained identified pathogens based on RNA-seq profiling. Three of the 11 samples (circled in Fig. 4) contained known pathogens detected at high abundance that were not included in the clinical pathogen panel: the coronaviruses NL63 and 229E, and the bacterium Chlamydia pneumoniae. Eight of the 11 samples had pathogens detected by RNA-seq but not by qRT-PCR or culture, including influenza B (N = 1), parainfluenza virus 1 (N = 1), SPN (N = 1), MCAT (N = 4), and HFLU (N = 1).

Ultimately, these additional detected pathogens highlight the ability of RNA-seq to provide a more complete picture of the microbiome and virome present in acute sinusitis samples, and suggest an expanded panel of viral and bacterial pathogens to be used in future clinical workflows.

Viral genome assembly and subtyping from host-derived metatranscriptomes: Read-based taxonomic classifications provide an estimate of the microbial species present in each sample. However, de novo genome assembly methods may be used to assemble longer fragments, including genomes of full-length RNA viruses, which can validate read-based predictions and reveal additional information.
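One simple downstream check on such assemblies is genome completeness, defined later in the Methods as >=99.5% of reference positions covered. A minimal sketch is below; it assumes a per-position coverage file such as the three-column output of samtools depth -a (reference, position, depth), and the file name is hypothetical.

def genome_completeness(depth_file):
    """Fraction of reference positions with >=1 mapped read, computed from
    the three-column output of `samtools depth -a` (ref, pos, depth)."""
    covered = total = 0
    with open(depth_file) as fh:
        for line in fh:
            _, _, depth = line.split("\t")
            total += 1
            covered += int(depth) > 0
    return covered / total if total else 0.0

completeness = genome_completeness("sample1141_rsvb.depth")  # hypothetical file
is_complete = completeness >= 0.995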
By aligning the RNA-seq reads to reference genomes of identified viruses, we were able to assemble partial to complete genomes for a total of 205 viruses across 163 samples, including 25 different human pathogenic viruses (Fig. 5A). In addition to the 12 viral groups from the clinical panel (Fig. 3), genomes were assembled for 9 additional respiratory viruses (e.g., coronaviruses) not tested for clinically. We also assembled genomes of enterovirus A and B, WU polyomavirus, and mamastrovirus 9, which are typically implicated in other illnesses such as gastroenteritis. A total of 31 (15%) were 100% complete, while 60 (30%) had completeness >90% (table S7). All assembled viral genomes were phylogenetically verified by sequence comparison to related genomes in NCBI through BLAST, with average nucleotide identities (ANIs) ranging from 95-100%.

To explore the use of assembled genomes for viral subtyping, we focused on the predictions for influenza A and B, since these were subtyped clinically using qRT-PCR. The subtyping results using assembled influenza genomes showed excellent agreement with the clinical results, with influenza A subtypes H1N1 and H3N2 having 100% (15/15) agreement and influenza B subtypes Yamagata and Victoria having 82% (9/11) agreement with qRT-PCR results (table S8).

We then focused on several cases of interest, performing a deeper genomic and phylogenetic analysis of newly assembled genomes. Three examples of assembled viral genomes are shown in Fig. 5B, including a genome of a novel HCoV-OC43 strain, an RSVB genome, and an enterovirus D68 genome. All three of these genomes have distinct mutation profiles from other strains in the NCBI database (Fig. 5B) and clustered as unique strains in phylogenetic analysis (Fig. 5C). All three of the genomes also showed broad sequencing coverage across the genome, with the exception of the RSVB genome from sample 1141, which showed a lack of coverage spanning the glycoprotein G gene. Interestingly, a previous study also identified G protein deletion mutant RSV strains in pediatric pneumonia patients from South Africa (34).

Host-response expression profiles distinguish bacterial from viral infections: Although RNA-seq analysis was capable of detecting pathogens directly from reads, most reads within RNA-seq samples were host (human) derived, ranging from 64.7-99.9%, which enables host-response profiling to potentially identify host biomarkers and immune responses associated with disease etiology (35-38).

To identify differentially expressed genes (DEGs) associated with bacterial versus viral infections, we compared host gene expression profiles of patients with bacterial pathogens to those with viral pathogens based on clinical diagnostic testing (Fig. 6A). Due to the presence of many (N = 138) complex samples containing a mixture of viral and bacterial pathogens, we chose to simplify the initial comparison and compared samples with only bacterial pathogens (N = 33) to those with only viral pathogens (N = 31), but subsequently analyzed all 221 samples. A total of 821 significant DEGs were detected with q < 0.001, of which 548 genes had increased expression in bacterial-positive patients and 273 genes had increased expression in viral-positive patients (Fig. 6A, table S9). We termed these genes "bacterial upDEGs" and "viral upDEGs".
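DEG calling in the study used DESeq2 with covariates (see Statistical analysis in the Methods). As a rough, simplified stand-in for that comparison only, the sketch below flags genes with a two-sample t-test and Benjamini-Hochberg correction on simulated log-normalized counts; it is illustrative and not the study's method.

import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Hypothetical log-normalized expression: genes x samples for the two groups
bact = rng.normal(5.0, 1.0, size=(500, 33))    # bacteria-only samples
viral = rng.normal(5.0, 1.0, size=(500, 31))   # virus-only samples
viral[:40] += 2.0                              # simulate 40 viral upDEGs

t, p = ttest_ind(bact, viral, axis=1)
reject, q, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")
viral_up = np.where(reject & (t < 0))[0]       # genes higher in the viral group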
In general, representative viral and bacterial upDEGs had lower expression levels in samples in which no bacterium or virus was detected by qRT-PCR/culture, and higher expression levels in samples containing both a virus and a bacterial pathogen (Fig. 6D). Interestingly, there are several exceptions to this pattern, including four samples that had a strong antiviral response despite no virus being detected by qRT-PCR/culture. Deeper investigation of these samples by RNA-seq revealed that three of them contained respiratory viruses (two coronaviruses and influenza B) (Fig. 4B) that were not detected by the qRT-PCR tests. Other exceptions include two samples that had no bacterial pathogen detected by culture/qRT-PCR but had a strong antibacterial response. One of these samples (sample 1303) had a bacterial pathogen (MCAT) identified at high abundance by RNA-seq. These results suggest that host-response profiling may provide an indication of viral or bacterial infection when traditional tests fail to detect a pathogen.

Magnitude of host responses correlates with viral and bacterial pathogen abundance: If the identified viral and bacterial upDEGs are genuine biomarkers of viral and bacterial infections, respectively, then their levels of expression should correlate with the abundance of viral and bacterial pathogens estimated from RNA-seq. To test this hypothesis, we calculated the total bacterial pathogen abundance as the sum of the relative abundance of the pathogens SPN, HFLU, and MCAT. We then binned all samples into ten groups, with group 1 having the lowest bacterial pathogen abundance and group 10 having the highest. We then repeated this analysis for viral pathogens, summing the total abundance of the 12 viral pathogens as well as the coronaviruses that were clearly present based on RNA-seq data but missing from the clinical test.

As shown in Fig. 7A, with increasing abundance of bacterial sinusitis pathogens (MCAT, SPN, HFLU), there is a clear increase in the expression levels of bacterial upDEGs. To quantify this pattern, for each sample we calculated the "magnitude" of the bacterial and viral host response as the average expression level (Z-score) of the bacterial and viral upDEGs. As shown in Fig. 7B, the magnitude of the bacterial host response correlated significantly with bacterial pathogen abundance (Pearson r = 0.50, two-tailed p = 1.6 x 10^-15). The same pattern was also seen for viruses: the abundance of viral pathogens also correlated significantly with the magnitude of the viral host response (Pearson r = 0.33, two-tailed p = 5.8 x 10^-7) (Fig. 7C,D). However, neither the bacterial nor the viral host response correlated with other clinical features, including the duration of cold symptoms and symptom severity (Fig. 7A). Although these pathogen-host-response correlations are a general pattern, not all samples display this trend. For example, several samples with high bacterial pathogen abundance lack a strong bacterial host response. In addition, one outlier (marked * in Fig. 7A) shows an individual with a low detected bacterial pathogen abundance but a strong bacterial host response. This could indicate an immune response to an unknown bacterial species.
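The host-response "magnitude" described here is simply the mean per-sample Z-score across the upDEG set, correlated against the summed pathogen abundance. The sketch below shows the computation; the matrices are randomly generated placeholders, not study data.

import numpy as np
from scipy.stats import zscore, pearsonr

rng = np.random.default_rng(1)
expr = rng.normal(size=(548, 221))         # bacterial upDEGs x samples (hypothetical)
pathogen_rpm = rng.lognormal(2, 1, 221)    # summed SPN+HFLU+MCAT abundance

# Magnitude of the bacterial host response per sample:
# mean Z-score across all bacterial upDEGs (Z computed per gene across samples)
z = zscore(expr, axis=1)
response_score = z.mean(axis=0)
r, p = pearsonr(response_score, np.log10(pathogen_rpm + 1))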
In addition to the association between host response and pathogen abundance, we also tested for host-response correlations with other clinical metadata. A weaker but significant (r = 0.33, p = 6.6 x 10^-7) host-response pattern was detected between a subset of genes and patient symptom severity scores (PRSS) at the time of diagnosis. A total of 45 genes were differentially expressed as a function of PRSS, which subdivided into 2 expression clusters (fig. S2). Cluster 1 was positively correlated with PRSS and includes the following genes: METTL7B, MMP3, PRF1, GNLY, MMP1, FPR3, GIMAP6, OLFML2B, DESI1, IL12RB2. Function enrichment analysis revealed that cluster 1 was associated with a response to infection (cellular defense response, natural killer cell mediated immunity, and cellular response to cytokine stimulus). Other pathways such as proteolysis and pyroptosis are also involved in the innate host immune response by eliminating and degrading infected cells (39,40).

RNA-seq classifies patients into distinct groups with unique pathogen-host response profiles: After examining host responses to bacterial and viral infections individually, we considered how bacterial and viral relative abundance together impact host responses within patients. To investigate this, we used the RNA-seq abundance to bin samples into four groups: those with low bacterial / low viral pathogen abundance (N = 60, 27%), high viral / low bacterial pathogen abundance (N = 51, 23%), high bacterial / low viral pathogen abundance (N = 51, 23%), and high bacterial / high viral pathogen abundance (N = 59, 27%). Here, the thresholds of "high" and "low" pathogen abundance were based on RNA-seq estimated levels (>= 60th percentile) and not the presence/absence classification obtained from qRT-PCR and culture-based testing.

The four groups of patients display distinct host-response signatures (Fig. 7E,F). As expected, samples with low bacterial and low viral pathogen abundance tend to have weak bacterial and antiviral responses (Fig. 7E). Samples with high viral abundance but low bacterial abundance display a strong antiviral pattern and a weak bacterial response. Samples with high bacterial pathogen abundance but low viral pathogen abundance are associated with a strong bacterial host response, and samples with high bacterial and viral pathogen abundance show both host responses. Again, there are several outliers that are exceptions to these general trends. The viral host response for individuals with both bacterial and viral pathogens was lower than for the viral-only group (p = 0.01), and the bacterial host response for individuals with both bacterial and viral pathogens was not significantly different from the bacterial-only group (p = 0.82). Finally, we tested whether the bacterial and viral host-response magnitudes alone could predict samples with high pathogen abundance, with pathogen abundance defined as described above using RNA-seq measurements. The bacterial host-response magnitude predicted high-bacterial samples with an AUROC of 0.79, and the viral host-response magnitude predicted high-virus samples with an AUROC of 0.80. If sensitivity is desired over specificity, high-bacterial samples could be predicted with a sensitivity/specificity of 80%/68% using host-response information alone. Ultimately, these analyses suggest that host-response information alone may have diagnostic value in differentiating between viral and bacterial sinus infections, especially when the relative abundance (pathogen load) is high.
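The four-group binning and the AUROC evaluation can be sketched as below, using the 60th percentile as the high/low cutoff as described; the abundance and score arrays are simulated placeholders.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
bact_abund = rng.lognormal(2, 1, 221)
response_score = 0.5 * np.log10(bact_abund + 1) + rng.normal(0, 0.4, 221)

# "High" pathogen abundance defined as >= 60th percentile of RNA-seq levels
high_bact = bact_abund >= np.percentile(bact_abund, 60)
auroc = roc_auc_score(high_bact, response_score)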
DISCUSSION

In this study, we performed metatranscriptomic analysis of 221 NP samples from children with clinically diagnosed acute sinusitis. Prior to this work, there has been a lack of research evaluating the use and applications of RNA-seq profiling in this clinical context. Our study provides several research contributions. First, it highlights the ability of RNA sequencing of clinical samples to accurately identify bacterial and viral pathogens associated with sinusitis infections and URTIs. Second, it provides an original dataset to assist with the development of future bioinformatic approaches for infectious disease profiling, including hundreds of assembled viral pathogen genomes contributing to ongoing pathogen genomic surveillance efforts. Third, it identifies host-response signatures of bacterial and viral infections in sinusitis, which could serve as the basis for the development of biomarker assays to be used in future clinical workflows that optimize the delivery of care.

Using RNA-seq, we achieved an overall sensitivity of 87% and specificity of 81% in reproducing the clinical results for the detection of the three bacterial species that are most commonly implicated in sinusitis (4). RNA-seq also demonstrated a significant ability to detect viral pathogens that were also detected by the qRT-PCR panel (average sens/spec of 86%/92%), as well as to predict viral load (Ct value). These accuracies are comparable to results obtained by previous studies using NGS for pathogen detection in NP samples (27,28).

For clinical decision making regarding antibiotic treatment, a key goal of sequencing-based approaches is to detect not only the pathogen of interest but also its antimicrobial resistance genes, which can be especially challenging in mixed metagenomic samples. As proof of principle, we focused on beta-lactamase resistance in HFLU isolates, which represents a key clinical issue (41,42). As done previously for pediatric nose and ear samples (43), we used CARD (25) to identify beta-lactamases in RNA-seq data. This RNA-seq workflow was able to correctly detect beta-lactamase genes in 67% of the resistant HFLU isolates, with a specificity of 96%. Additionally, beta-lactam resistance SNPs in the Haemophilus influenzae PBP3 gene were also detected in several samples, which may represent an additional resistance mechanism that was detected by RNA-seq profiling but not covered by clinical AMR testing.

Finally, through de novo assembly methods, we were able to assemble the genomes of 205 viral pathogens with varying degrees of completeness. Assembled genomes confirm read-based predictions and provide added information that cannot be obtained from short sequencing reads or qRT-PCR-based methods. For example, phylogenetic analyses of some of these viruses (e.g., HCoV-OC43, RSV B, enterovirus D68) revealed unique differences from closely related genomes in the database, suggesting that they represent distinct strains. A potentially relevant mutation (absence of large segments of the G gene) was identified in an RSV B strain, similar to previous reports (34). Further analysis of RSV genomes from patient samples is needed to determine the frequency of G deletion mutants, which could be important information to consider for RSV vaccine design.
An advantage of metatranscriptomic RNA-seq over culture or qRT-PCR is the ability to perform a broad and untargeted analysis to detect any species whose genome is available in the reference database, which theoretically improves the sensitivity of pathogen detection and discovery. Out of 221 pediatric sinusitis patients tested, 19 did not have any bacterial or viral pathogen detected by culture-based or qRT-PCR testing. RNA-seq identified plausible pathogens for acute sinusitis in 11 of these 19 samples, including cases of influenza B and PIV1 that were missed by qRT-PCR. Not surprisingly, several new pathogenic bacteria and viruses were also detected in these samples and were verified by genome assembly and phylogenetics. These included two coronaviruses (NL63 and 229E), as well as the bacterium Chlamydia pneumoniae. Other identified organisms included commensal organisms of the nasal microbiome and opportunistic pathogens that may or may not play a direct role in sinusitis (e.g., different species of Moraxella and Corynebacterium). Clarifying the role of these and other species in sinusitis etiology is a challenging goal for future work.

One of the most exciting aspects of this study is the identified host-response gene expression patterns associated with bacterial and viral sinusitis infections. Since the pathogen composition of our patient cohort was complex, including a large number of samples containing both bacterial and viral pathogens based on culture/qRT-PCR, we chose to simplify the initial comparison to virus-positive-only samples versus bacteria-positive-only samples. This enabled the detection of virus-associated and bacteria-associated host DEGs ("viral host response" and "bacterial host response") that formed the basis of subsequent analyses. Remarkably, the magnitude of these host responses correlated significantly with the total abundance of bacterial or viral pathogens detected in the samples. Importantly, this correlation between pathogen abundance and host-response magnitude was only identified for a limited subset of bacterial species (those previously identified as sinusitis pathogens: MCAT, SPN, HFLU) and respiratory viruses, and the correlation was absent when examining other species detected in the data that may reflect commensal organisms. This finding indicates that the relative abundance of specific bacterial and viral species within the nasopharynx is a determinant of the strength of the host immune response. This is consistent with immunological expectations, since the expression of host antiviral and antibacterial pathways depends on the levels of viral (e.g., dsRNA) and bacterial pathogen-associated molecular patterns (e.g., lipopolysaccharide) sensed by the host immune system. Previous studies have also reported a correlation between antiviral host responses in RNA-seq and viral load (44-46). However, our study is unique in analyzing the interplay between a complex mixture of bacterial and viral pathogens and their impact on the host transcriptomic response.

Although traditional methods (culture and qRT-PCR) provided a simple classification of our samples based on the detected presence/absence of a pre-defined set of pathogens, metatranscriptomic data enabled a more holistic classification based on pathogen abundance and host-response information (Fig. 7).
When taking both pathogen abundance and host-response information into consideration, the samples could be subdivided into four main groups: those with a "low" abundance of bacterial or viral pathogens, which tend to lack a host response, and those with a "high" abundance of bacterial pathogens, respiratory viruses, or both, which tend to show the expected host responses. Interestingly, the observed correlation between pathogen abundance and host response is not perfect; there are several outlier samples that exhibited a strong host-response pattern and yet lack a detected pathogen, and other samples that contained a high pathogen abundance but lack a detectable host response. For the former category, it is possible that those samples contained other pathogens that were not included in our pathogen panel, which may include opportunistic infections by commensal organisms, for example. For the latter category, these cases could indicate delayed host responses in patients at the time of sampling, shedding of viral RNA at a post-infection time point, which may be associated with a reduced host response, or simply an imperfect correlation between host responses and pathogen abundance. Nevertheless, future research focusing on the host responses of patients with infectious disease, and on factors that account for discrepancies between detected pathogen abundance and host response, could clarify the mechanistic understanding of disease etiology.

There are several limitations of our study that could account for variation in the results obtained. First, the classification into viral and bacterial infection was inferred based on the presence/absence of bacterial and viral pathogens, but some of these organisms may be present as commensals, and their presence alone does not necessitate an infection (47,48). Second, the enrollment criteria for this study recruited patients who had been experiencing symptoms for at least 6 days when sampled. Since peak shedding of some viruses can occur within 48 hours of symptom onset, the chosen sampling time may have led to a reduced sensitivity of viral detection as well as lower coverage for the genomes assembled. Variation in the timing of bacterial infections could also impact the sensitivity of bacterial detection by RNA-seq. Third, our sensitivity for pathogen detection by RNA-seq is dependent on the depth of sequencing. Deeper sequencing may have been necessary to detect viruses, for example, that were false negatives by RNA-seq but were detected using qRT-PCR. DNA viruses in particular (e.g., adenoviruses) may have been more prone to weak detection due to the use of RNA-seq over DNA-seq. Future studies that employ both metatranscriptomic and metagenomic sequencing with repeated time-series sampling of patients may overcome some of the limitations described above. Nevertheless, the current study provides a starting framework for exploring the use of high-throughput sequencing of patient samples to uncover etiology and host response in pediatric sinusitis and other upper respiratory infections.
Study design and description of the cohort: Between February 2016 and April 2022, 510 children 2 to 11 years of age (inclusive) with clinically diagnosed acute sinusitis were enrolled in a randomized multicenter double-blind trial (ClinicalTrials.gov number, NCT02554383). Exclusion criteria have been previously described (6). Children were recruited from 6 outpatient centers. Children were randomly assigned to receive 10 days of amoxicillin-clavulanate or matching placebo. A total of 204 patients did not have an NP sample collected, or their sample was not preserved in RNA buffer, and were excluded. Of the remaining 306 patients' samples, 61 were not sequenced due to low RNA yield. Although 245 samples underwent RNA sequencing, batch 1 was prepared with a different kit/protocol and displayed a strong batch effect when analyzed; it was therefore removed, leaving 221 patients. The primary outcome, symptom burden, was assessed by having parents complete the Pediatric Rhinosinusitis Symptom Scale (PRSS) electronically every evening on Days 2 to 11. As previously described (6), the PRSS is a validated scale that assesses symptoms of sinusitis.

Culture and sensitivity pattern of bacterial pathogens: We collected NP swabs from all children at study entry. As previously described (23), the tip of the swab was cut, placed in DNA/RNA shield (Zymo, R1100), and transported on ice to the lab. The remainder of the swab was placed into Amies transport medium, transported on ice to the Clinical Laboratory at UPMC Children's Hospital of Pittsburgh within 48 hours, and plated on blood and chocolate agars. Identification of SPN, HFLU, and MCAT on culture was accomplished using standard microbiological techniques. HFLU isolates were tested for beta-lactamase production using a cefinase disk.

qRT-PCR for viral co-infection: Using an aliquot of Amies transport media plus MagMax lysis/binding buffer, nucleic acid extraction was performed for viral identification using the ABI MagMax96 Express automated instrument and the MagMax 96 Viral Isolation Kit (Thermo Fisher, AMB 18365) (23,49). Adenovirus, influenza A/B/C, human metapneumovirus (HMPV), human rhinovirus (HRV), parainfluenza virus (PIV) subtypes 1-4, enterovirus D68, and respiratory syncytial virus (RSV) were tested for using individual real-time qRT-PCR assays. A Ct threshold of 40 was used for all viruses, and positive and negative controls were included in each run.
RNA-seq library generation, sequencing, and data processing: RNA was assessed for quality using a Fragment Analyzer 5300, and RNA concentration was quantified on a Qubit FLEX fluorometer. Libraries were generated with either the Illumina TruSeq Stranded Total RNA prep (20020599) or the Illumina Stranded Total Library Prep kit (Illumina: 20040529) according to the manufacturer's instructions, after using the Illumina Ribo-Zero Plus rRNA Depletion Kit (20037135). Batch 5 was additionally treated with the Illumina Ribo-Zero Plus Microbiome rRNA Depletion Kit (20072062). For library generation, 100 ng of input was used for the Illumina TruSeq Stranded Total RNA protocol with 15 cycles of indexing PCR, and 20-100 ng of RNA input was used for the Illumina Stranded Total Library Prep protocol, with 15 cycles of indexing PCR for 100 ng of RNA input and 17 cycles of indexing PCR for input RNA ≤100 ng. Library quantification and assessment were done using a Qubit FLEX fluorometer and the Fragment Analyzer 5300. Libraries were normalized and pooled to 2 nM by calculating the concentration based on the fragment size (base pairs) and the concentration (ng/µl) of the libraries. Sequencing was performed on an Illumina NextSeq 2000, using a P3 200 flow cell with sequencing read lengths of 2x101 bp, with a target of 40 million reads per sample. Sequencing data were demultiplexed by the Illumina on-board DRAGEN FASTQ Generation software. Library generation and sequencing were performed by the University of Pittsburgh Health Sciences Sequencing Core (HSSC), Rangos Research Center, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania, United States of America. Fastp v0.23.1 (50) was used for quality trimming and adapter removal on default parameters. FastQC v0.11.9 (49) and MultiQC v1.12 (51) were used to check the quality of all sequence files before and after processing to ensure the data were ready for analysis.

Taxonomic classification of RNA-seq reads for detection of bacterial and viral pathogens: Taxonomic classification of sequencing reads was performed using Kraken 2 v2.1.2 (24) with default parameters. The PlusPF database dated 9/8/2022 (https://benlangmead.github.io/aws-indexes/k2) was used with Kraken 2, which was originally built from NCBI RefSeq archaeal, bacterial, viral, plasmid, human, UniVec_Core, protozoan, and fungal sequences. A Kraken 2 detection threshold of 3 reads was used for bacterial species (selected based on F1 score optimization), while no threshold was used for viruses. New pathogens identified by Kraken 2 but not included in the clinical panel were further validated using BLAST (52), MASH (53), and metAnnotate (54), focusing on the samples associated with the largest estimated abundance for each pathogen.

The normalized abundance of each taxon was calculated as the number of reads per million (RPM). Relative abundance heatmaps were generated using R v4.2.1 and the pheatmap package. For display, log10(RPM + 1) values were used to avoid log(0) errors. Receiver operator curves were also generated in R, and the area under the curve was computed using the pROC package. Pathogen abundance jitter plots and top species plots were generated using ggplot2 in R (55).
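The 3-read bacterial threshold above was chosen by F1 optimization; a minimal sketch of such a threshold sweep against culture labels is shown below, with simulated abundances and noisy labels standing in for the real data.

import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(3)
# Simulated per-sample RPM values: a mixture of high- and low-abundance samples
rpm = np.where(rng.random(221) < 0.4,
               rng.lognormal(2, 1, 221), rng.lognormal(-1, 1, 221))
culture = (rpm >= 3) != (rng.random(221) < 0.1)  # hypothetical noisy labels

# Sweep candidate thresholds and keep the one maximizing F1
thresholds = np.arange(0.5, 20.5, 0.5)
scores = [f1_score(culture, rpm >= t) for t in thresholds]
best_threshold = thresholds[int(np.argmax(scores))]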
Viral load was estimated from RNA-seq data following the method of Graf et al. (27). The number of detected reads for a virus was divided by the total number of reads in the sample and the size of the respective viral genome in kilobases, and then multiplied by 1 million to generate an RPKM value (reads per kilobase of reference sequence per million total sequencing reads).

Detecting beta-lactamase genes using RNA-seq: For the samples that were positive for HFLU based on culture tests, sequencing reads classified as non-human by Kraken 2 were extracted using extract_kraken_reads.py and assembled into contigs using rnaSPAdes v3.15.4 with default parameters (56). Using the CARD resistance gene identifier (RGI) software v6.0.1 (25) and the default database, the contigs were analyzed with the 'main' function of the RGI tool with the 'low-quality' and 'include-nudge' parameters. The results were filtered to keep "strict" or "perfect" hits to beta-lactamase genes, genes acting on antibiotics belonging to the penam drug class, and hits with at least 10.0% sequence coverage of the reference gene.

Viral genome assembly and phylogenetic analysis: RefSeq genomes for all viruses of interest were downloaded from NCBI. Non-human reads were mapped to viral genomes using BBMap v38.86 (57) to create .bam files. The consensus sequence for each sorted mapping result was produced using samtools v1.16.1 with the '-a' option. A Python script was used to calculate whole-genome coverage relative to the RefSeq viral genome. Genome coverage was considered complete if >= 99.5%. FastANI v1.32 was used to calculate the average nucleotide identity to the closest reference genome for each genome assembled.

Complete viral genomes were queried against the complete NCBI non-redundant nucleotide database using BLAST (52). Up to 35 top-matching sequences were downloaded and aligned to the assembled genome using the MUSCLE algorithm (58). The multiple genome alignment was used to generate a phylogenetic tree with FastTree v2.1.10 (59), and FigTree v1.4.4 was used for tree visualization.

Statistical analysis: Differentially expressed genes (DEGs) were detected by comparing samples positive for viruses only versus samples positive for bacteria only, based on culture or qRT-PCR testing. In the design formula for the 'DESeqDataSetFromTximport' function, we also controlled for the potential confounding variables "batch number", "sex", and "age (scaled)". Log2 fold changes and adjusted p-values (q-values) were calculated for all genes, and a significance threshold of q <= 0.05 was used to identify DEGs. Function enrichment analysis of genes with significantly increased expression in the viral and bacterial groups was performed using EnrichR (accessed June 2023) (62) with the GO Biological Process 2021 ontology and an FDR threshold of 0.05.
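The beta-lactamase filtering step described above can be sketched as a simple table filter. The column names below ("Cut_Off", "Drug Class", "Percentage Length of Reference Sequence", "Best_Hit_ARO") follow recent RGI tab-delimited output and, along with the file name, should be treated as assumptions rather than the study's exact code.

import pandas as pd

rgi = pd.read_csv("sample_rgi_output.txt", sep="\t")  # hypothetical RGI 'main' output

# Keep strict/perfect hits to genes acting on the penam drug class
# with at least 10% coverage of the reference gene
keep = (
    rgi["Cut_Off"].isin(["Strict", "Perfect"])
    & rgi["Drug Class"].str.contains("penam", case=False, na=False)
    & (rgi["Percentage Length of Reference Sequence"] >= 10.0)
)
beta_lactamase_hits = rgi.loc[keep, ["Best_Hit_ARO", "Cut_Off", "Drug Class"]]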
Fig. 1. Overview of study design. The study cohort was comprised of 221 children with acute sinusitis who underwent collection of NP swabs. Culture was used to detect three bacterial species (Haemophilus influenzae, Streptococcus pneumoniae, Moraxella catarrhalis) and qRT-PCR was used to detect 12 viruses of clinical relevance. Haemophilus influenzae isolates were tested for beta-lactamase production (N=69). In parallel, RNA extraction from NP swabs and sequencing were performed to conduct metatranscriptomic analysis using a bioinformatics approach. Using the sequencing data, several analyses were performed: pathogen detection and quantification, assembly of detected respiratory viruses, detection of beta-lactamase genes, and transcriptomic analysis of host responses.

Fig. 2. Metatranscriptomic detection of bacterial pathogens in NP samples from children with clinically diagnosed acute sinusitis. (A) Heatmap showing the detected abundance of three bacterial pathogens (H. influenzae, M. catarrhalis, S. pneumoniae) in patient metatranscriptomes. For each bacterium, the culture-based test result (positive, grey; negative, white) is shown on the left of the column, and the estimated RNA-seq abundance is depicted on the right of the column as a color gradient (absent, white; low, yellow; high, dark blue). Each row in the heatmap and tip in the hierarchical tree corresponds to an individual patient sample. (B) Boxplots depicting pathogen abundance in positive (+) versus negative (-) samples (labeled on the X axis) defined based on culture. The boxes show the interquartile range and median line, and the whiskers show the variability extending to the furthest data points within 1.5 times above and below the interquartile range. Outliers outside of these ranges are shown as data points. (C) ROC curves illustrating the specificity and sensitivity of metatranscriptomic pathogen detection, with area under the curve (AUC) values displayed above.

Fig. 3. Detection of common respiratory viruses in NP metatranscriptomes. (A) Abundance heatmap for viruses detected in NP metatranscriptomes for 221 patients. For each virus, the qRT-PCR result is shown on the left of the column as a color gradient (negative, white; high to low cycle threshold values, light gray to black), and the estimated RNA-seq abundance is depicted on the right of the column as a color gradient (absent, white; low, yellow; high, dark blue). Each row in the heatmap and tip in the hierarchical tree corresponds to an individual patient sample. (B) qRT-PCR abundance (1/cycle threshold) versus metatranscriptomic viral load (log10 of the RPKM). The estimated viral load from RNA-seq is significantly correlated with the 1/Ct value from qRT-PCR. (C) Metatranscriptomic abundance of respiratory viruses in negative (-) versus positive (+) samples (labelled on the X axis) defined by qRT-PCR test result. The boxes show the interquartile range and median line, and the whiskers show the variability extending to the furthest data points within 1.5 times above and below the interquartile range. Outliers outside of these ranges are shown as data points.
Fig. 4. Metatranscriptomics of NP samples from children with acute sinusitis identified organisms not detected by qRT-PCR or culture. The organisms included in the heatmap are a subset of the full set of organisms detected by RNA-seq that exceed minimum abundance thresholds and include human pathogenic bacteria and viruses (see table S6 for the full dataset). The organisms are sorted vertically based on their maximum relative abundance within a sample (across 221 samples). The heatmap displays the relative abundance of each organism in each sample as estimated by Kraken 2. The left heatmap includes samples with clinically identified pathogens by qRT-PCR or culture (N = 202), and the right heatmap includes 19 samples without a pathogen detected by qRT-PCR or culture. For the latter samples, several samples contain additional organisms identified by metatranscriptomics that are plausible causes of sinusitis. The barplot on the right depicts the total number of samples containing each detected pathogen.

Fig. 5. Assembled genomes of viruses from children with clinically diagnosed acute sinusitis. (A) Bar graph depicting the number of assembled genomes for various species of respiratory viruses across the full dataset (N = 205 total viruses assembled from 163 samples). (B) Read pileups for three selected samples showing sequencing reads mapped to reference genomes of human coronavirus (HCoV) OC43, RSV, and enterovirus D68, with SNP profiles as colored lines. (C) Phylogenetic analysis of three assembled viral genomes and their top 25 closest matching complete genomes from BLAST. Each newly assembled virus (red) is a unique strain that clusters as a distinct branch within its phylogenetic tree.

Fig. 6. Identification of differentially expressed host genes indicative of host responses to bacterial and viral infection in acute sinusitis patients. (A) Volcano plot of differentially expressed genes between samples with only bacterial pathogens and samples with only viral pathogens according to qRT-PCR and culture test results. Human genes shown in the upper right quadrant have significantly increased transcript abundance in samples with bacteria (bacterial upDEGs), and genes in the upper left quadrant have significantly increased transcript abundance in samples with virus(es). Genes are partitioned in the plot based on p-value significance thresholds. (B and C) Biological functions and pathways that are significantly enriched among bacterial and viral upDEGs, calculated using enrichR. For each function term, the associated adjusted p-value and number of genes are depicted. (D) Example bacterial and viral upDEGs and their expression levels (transcript abundance) across four categories of patients: those with neither bacteria nor virus detected by culture or qRT-PCR, those with only bacteria, those with only virus, and those with both a bacterium and a virus.
Fig. 7. Host response correlates with the relative abundance of bacterial and viral pathogens. (A) Expression heatmap of bacterial upDEGs (bacterial host-response genes), with samples (columns) sorted by total metatranscriptomic bacterial pathogen abundance. The associated metadata for all samples are also plotted above the heatmap. * marks an outlier sample associated with a strong bacterial host response but with a low detected abundance of MCAT, HFLU, or SPN. (B) Bacterial host-response score versus metatranscriptomic bacterial pathogen abundance. The bacterial host-response score was calculated as the mean expression level (Z-score) of all the bacterial upDEG genes. (C) Expression heatmap of viral upDEGs (viral host-response genes), with samples (columns) sorted by metatranscriptomic viral pathogen abundance. (D) Viral host-response score versus metatranscriptomic viral pathogen abundance. The viral host-response score was calculated as the mean expression level (Z-score) of all the viral upDEG genes. (E) Heatmap of bacterial and viral host responses (upDEGs), where samples (columns) have been sorted into four groups based on high or low bacterial/viral pathogen abundance, with high considered as a 60th percentile or greater relative abundance. In general, samples with low bacterial and viral abundance tend to lack a bacterial/viral host response, whereas samples containing bacteria, viruses, or both displayed the appropriate response. (F) Jitter plots of the bacterial and viral host-response scores across four categories of samples. Bacterial and viral host-response scores were calculated by averaging the expression-level Z-scores of all bacterial and viral upDEGs, respectively.

Tables S1 to S9

Table 1. Demographic and clinical characteristics of pediatric patient participants with sinusitis. Demographic and clinical data for the study cohort, comprised of 221 children with persistent or worsening symptoms consistent with a diagnosis of acute sinusitis. Enrolled patients were assessed for symptoms and symptom severity. Pathogen detection for 3 common bacteria and a panel of 14 viruses was accomplished using culture and qRT-PCR, respectively. Median (interquartile range); ¥ only samples positive for HFLU were tested (N = 69).
EFFECTS OF THE APPLICATION OF CONVENTIONAL METHODS IN THE PROCESS OF FORMING THE PICK-UP TRAINS

This paper examines the problem of forming pick-up trains by conventional methods (the Futhner method and the Special method), with the aim of establishing the basic characteristics of the track facilities and the values of the shunting-operation indicators needed to evaluate the effects of applying these methods. The problem under consideration has so far not been examined in the literature to a sufficient extent, although in practice it has proved to be important. For this reason, a simulation study has been undertaken whose results provide values and measures for assessing the quality of station/yard operations, as well as for evaluating newly designed station solutions.

INTRODUCTION

With the increased volume of global trade, rail transport of goods has been growing daily and gaining importance, especially in view of its significantly lower costs and its competitiveness compared with alternative modes. At the same time, the volume and complexity of operations in technical freight yards have increased, in particular those related to forming and splitting pick-up freight trains. To carry out the operations involved in forming pick-up trains, technical freight yards use either a special group of tracks or the end parts of sorting-departure tracks. In both cases, conditions on the number and length of tracks for the final forming of pick-up trains must be met. Although the methods used in practice for forming pick-up freight trains (known in the literature as the Futhner, Special, Simultaneous and Japanese methods) are well known and long established, the effects of their application have not been fully examined, especially as regards the indicators of shunting operations, which are important for defining the elements of the final station layout and for choosing an adequate organization and technology of operations in the process of pick-up train forming. The lack of information of this kind makes it more difficult to plan, and invest in, the new track facilities needed to carry out the increased shunting work in freight yards. In view of the complexity and stochastic nature of the processes under consideration, simulation is one of the most efficient techniques that can be applied to analyse the behaviour of such systems and to obtain the necessary quantitative indicators. Simulation models can be an efficient means of assessing the required station and wagon capacity, as well as of analysing the impact of the numerous factors that determine these work processes.

For this reason, this paper presents a simulation study used to analyse the complex processes evolving during the forming and splitting of pick-up freight trains in technical stations, with the aim of establishing the values of indicators important for analysing the real system under consideration and for taking the relevant decisions [1]. The indicators analysed in the simulation model are: number of tracks, track lengths, numbers of splittings and formings, number of handled wagons (performance) and the times required to carry out these activities.
In determining the numerical values of these indicators, their functional dependences have been analysed in relation to: the method applied, the number of intermediate stations served by a technical freight yard, the number of wagons per intermediate station, the number of wagons per train, the number of pick-up trains, the number of groups per train (intermediate stations) and the number of groups per intermediate station formed in the process of collection. The simulation study gives results that may help designers and shunting dispatchers in their daily work.

PROBLEM DEFINITION AND OBJECTIVE OF THE PAPER

Pick-up trains are one of the three basic categories of freight trains carrying goods by rail. They run on line sections between two technical freight yards, performing operations in intermediate stations on the principle of "leave, pick up or exchange" wagons. In these intermediate stations the locomotives can perform operations with wagons in one of the following ways: pick up wagons from the initial train composition formed in the technical freight yard and leave them at loading-unloading and handling points in the yard; pick up wagons from loading-unloading and handling points in the yard and connect them to the train; or perform both of these activities, one after the other.

Such composition-changing operations require additional shunting in the intermediate stations under consideration and significantly affect the speed of freight transport. At the same time, such work reduces the carrying capacity of the railway line and requires a larger wagon fleet to meet a specified traffic volume. In short, these operations diminish the railway's capacity and affect its quality of service. The extent to which its operating indicators are affected depends greatly on the methods used for carrying out the operations and on the capacity employed in applying the chosen method.

In order to achieve the best possible effects, i.e. to speed up the additional shunting operations in intermediate stations, the technical freight yards have the task of suitably preparing the compositions. This preparation involves grouping and coupling the wagons in the train according to the sequence of intermediate stations along the line; to meet this requirement, it is necessary to carry out an additional forming of pick-up trains in the technical freight yards, after the collection of wagons is completed, on sorting, sorting-departure or special groups of tracks, a process which is of a stochastic character. This process requires corresponding track and locomotive facilities, in combination with a planned organization and technology of operations, i.e. the methods for performing it (Figure 1). Owing to these requirements, the assignments of technical freight yards obviously increase with respect to the volume of shunting operations [4].
To form pick-up trains, a great number of methods are used in practice, among which the Futhner method and the Special method are the classical ones. They are methods of consecutive forming of pick-up trains, which is the subject of this paper. These methods differ both in the technology of wagon groupage applied in stations and in the track capacity that can be used to carry out this process, and hence in the effects of performing the whole process [5]. Which track facilities, and of what type, are necessary to apply each method, and what effects can be expected in the process of forming the pick-up trains, are the most important questions that the simulation model has to answer.

A brief presentation of the Futhner method

The Futhner method is the oldest method of forming pick-up trains. It is named after Harry Futhner, who first applied it in practice in 1880 in the Liverpool station, which had a small number of tracks and where shunting operations were carried out with great difficulty.

According to the Futhner method, by means of two splittings and two repeated connectings of the wagons or wagon groups collected at random on the tracks for the collection of pick-up trains, one can quickly arrange the wagons into groups for a number of intermediate stations equal to at most the square of the number of tracks available for the shunting operation in question [6] or [9]. Between the number of groups (i.e. the number of intermediate stations) and the number of shunting tracks used for arranging the groups by intermediate station, there is thus the interdependence

n_is <= n_t^2,

where n_is is the number of intermediate stations for which a pick-up train is formed and n_t is the number of tracks for shunting operations.

To illustrate the characteristics of this method more easily, a general example of forming a pick-up train is given, with wagon groups for n_is intermediate stations formed on n_t tracks (case n_is > n_t).
The procedure is as follows. The wagons collected for the pick-up train on the sorting or sorting-departure track are first moved to the shunting track, and then the process of sorting by destination station is carried out. In the first sorting, wagons for intermediate stations 1; n_t+1; 2n_t+1; ...; n_is-n_t+1 are left on the first track, wagons for intermediate stations 2; n_t+2; 2n_t+2; ...; n_is-n_t+2 on the second track, and so on, up to the wagons for intermediate stations n_t; 2n_t; 3n_t; ...; n_is, which are left on the n_t-th track (if n_is is a multiple of n_t). In this way the first process of splitting the wagons by track is completed. This is followed by the first process of connecting the wagons from the individual tracks and forwarding them to the shunting track, in preparation for the second process of sorting. For this process it is important to know the location of the locomotive and the required sequence of wagons in the train relative to the locomotive, that is, whether the wagons, going from the locomotive towards the end of the train, are arranged from the departure towards the destination stations (from 1 to n_is) or from the destination towards the departure station (from n_is to 1). In the former case the making-up is done from the 1st to the n_t-th track, and in the latter the other way round, from the n_t-th to the first track.

After this making-up is completed, the second process of sorting starts. Here all wagons in the group for one intermediate station are left on one track, those for the next, non-neighbouring intermediate station on the second track, and so on, until one arrives at a wagon whose intermediate station is a neighbour of one whose wagons have already been sorted; such a wagon is then added, i.e. sorted, to that track. (Example: the wagons for intermediate stations 1; 2; ...; n_t are left on the first track, wagons for intermediate stations n_t+1; n_t+2; ...; 2n_t on the second track, and so on, up to the wagons for intermediate stations n_is-n_t+1; n_is-n_t+2; ...; n_is, which are left on the n_t-th track.) In this way all sorted wagons of the pick-up train are arranged in the sequence of intermediate stations, but on different tracks. Therefore another process of connecting by track remains to be carried out, so that the wagons in the train are fully arranged in the sequence of intermediate stations. In this making-up process it is important to know that it is done in the same sequence as the previous one: if in the first making-up the sequence was from the 1st to the n_t-th track (or from the n_t-th to the 1st), then in the next making-up the same sequence is maintained.

The process of forming the pick-up train by stages, in the case where there are three tracks on which collection is made for 9 intermediate stations, is presented in Figure 2.
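Because the two-pass rule is purely arithmetical, it can be illustrated compactly. The following Python sketch only illustrates the grouping rule described above, under the assumption that each wagon is represented by its destination-station number (1-based); the function name and data structures are illustrative, not taken from the paper.

```python
# A minimal sketch of the Futhner method, assuming each wagon is represented
# by its destination-station number (1-based) and n_is <= n_t**2.

def futhner_form(wagons, n_t):
    """Two sortings and two connectings arrange the wagons by station."""
    # First sorting: the wagon for station s goes to track (s - 1) % n_t,
    # i.e. stations 1, n_t+1, 2n_t+1, ... share the first track, and so on.
    tracks = [[] for _ in range(n_t)]
    for s in wagons:
        tracks[(s - 1) % n_t].append(s)
    # First connecting: couple the cuts track by track onto the shunting track.
    train = [s for track in tracks for s in track]
    # Second sorting: station s goes to track (s - 1) // n_t, so each track
    # collects a consecutive block of stations, already internally ordered.
    tracks = [[] for _ in range(n_t)]
    for s in train:
        tracks[(s - 1) // n_t].append(s)
    # Second connecting yields the fully ordered composition.
    return [s for track in tracks for s in track]

# Example of Figure 2: 9 intermediate stations collected at random, 3 tracks.
print(futhner_form([7, 2, 9, 1, 5, 3, 8, 4, 6], 3))
# -> [1, 2, 3, 4, 5, 6, 7, 8, 9]
```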
A brief presentation of the Special method

The Special method of forming pick-up trains is also applied in stations equipped with a small number of tracks, as in the case of the Futhner method. When this method is applied, there is no required relation between the number of tracks and the number of intermediate stations for which the wagons have to be grouped in trains: it can be applied on an arbitrary number of tracks, unlike the Futhner method. Its application is reflected in the values of the shunting-operation indicators and in the lengths of the tracks on which this process takes place, which have not been investigated so far.

To enable a better identification of the characteristics of this method, a general example is given of forming a pick-up train consisting of wagon groups for n_is intermediate stations on n_t tracks (case n_is > n_t).

The procedure is as follows. The wagons collected for a pick-up train on the sorting or sorting-departure track are first brought to the shunting track, and thereafter a process of sorting by destination station takes place. In the first sorting, the wagons for intermediate stations 1 up to n_t-1 are left on the corresponding tracks, and all other wagons are forwarded to the n_t-th track. This is followed by the first process of connecting the wagons from the different tracks, starting from the n_t-th track, through the (n_t-1)-st, down to the 2nd, and moving them to the shunting track in preparation for the next process of sorting. In this sorting all wagons in the groups for intermediate stations 2 up to n_t-1 are already grouped and sorted in their respective sequence, so they are all left on the 1st track, where the wagons for the first intermediate station are already placed. To these groups on the 1st track the wagons for the n_t-th intermediate station are added, and on the other tracks the wagons are sorted in the following sequence: on the 2nd track, wagons for intermediate station n_t+1; on the 3rd track, wagons for n_t+2; and so on up to the (n_t-1)-st track, which receives the wagons for intermediate station 2(n_t-1); the n_t-th track receives the wagons for station 2n_t-1 and all further stations up to the n_is-th intermediate station. Then the wagons are again grouped by track, starting from the n_t-th track down to the 2nd, and brought to the shunting track in preparation for the third process of sorting, which is similar to the previous one. The process continues until the wagons for the last intermediate station are sorted.

In the case n_is <= n_t, a complete splitting of the wagons by intermediate station is carried out during the first sorting, so that only a grouping (connecting) of the wagons by track remains to be done.

For a better insight into the stages of forming a pick-up train by this method, the case of 3 tracks on which wagons are collected for 9 intermediate stations is presented in Figure 3. (Note: for this method, too, it is important to know the location of the locomotive and the required sequence of wagons in the train relative to the locomotive, that is, whether the wagons, going from the locomotive towards the end of the train, are arranged from the departure towards the destination station (from 1 up to n_is) or from the destination towards the departure station (from n_is up to 1). In the first case the splitting is made in order from the 1st up to the n_t-th intermediate station, and otherwise the other way round, from the n_t-th up to the 1st.)
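The pass-by-pass bookkeeping of the Special method can also be sketched in a few lines. The sketch below reproduces the described outcome of each pass rather than the physical shunting moves; the wagon representation and names are the same illustrative assumptions as in the previous sketch.

```python
# A rough simulation of the Special method (wagons as destination-station
# numbers, case n_is > n_t); bookkeeping only, not the physical moves.

def special_form(wagons, n_t, n_is):
    """Return the ordered composition and the number of sorting passes."""
    pick = lambda pool, s: [w for w in pool if w == s]
    # First sorting: stations 1 .. n_t-1 on individual tracks, everything
    # else parked on the n_t-th track.
    track1 = pick(wagons, 1)
    staged = [pick(wagons, s) for s in range(2, n_t)]
    rest = [w for w in wagons if w >= n_t]
    passes, s_next = 1, n_t
    while rest or any(staged):
        # Connect tracks n_t .. 2 back to the shunting track and re-sort:
        # the staged groups plus the next station join track 1 in order,
        # the following n_t-2 stations are staged, the remainder is parked.
        for g in staged:
            track1 += g
        track1 += pick(rest, s_next)
        staged = [pick(rest, s)
                  for s in range(s_next + 1, min(s_next + n_t - 1, n_is + 1))]
        rest = [w for w in rest if w >= s_next + n_t - 1]
        s_next += n_t - 1
        passes += 1
    return track1, passes

# Example of Figure 3: 9 stations on 3 tracks need 5 sortings in all.
train, passes = special_form([7, 2, 9, 1, 5, 3, 8, 4, 6], 3, 9)
print(train, passes)   # [1, 2, ..., 9] 5
```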
CONSTRUCTION OF THE SIMULATION MODEL

The internal structure and design of the simulation model of forming pick-up trains by the Futhner and Special methods are directly related to the specified objectives. The basic objectives of this work can be reduced to the following assignment: for predetermined parameters, establish by simulation [8]:

1. the track lengths required for carrying out the scheduled processes (indicators important for design);
2. the values of the performance indicators of the process of final forming of pick-up trains (indicators important for evaluating the functioning of the system under consideration).

In practice such problems have so far been resolved mainly on the basis of experience and intuition, without systematic analyses as a basis for taking the necessary measures. Such an approach resulted in adverse effects which became obvious only after realization. The simulation approach to this problem assumes quite a different methodology, including:

1. setting the starting number of tracks in the station group of tracks to a real value;
2. setting the composition of the pick-up trains to be formed within a wider framework, in relation to the number of wagons in the train, the number of intermediate stations and the distribution of wagons by intermediate station;
3. repeating the experiment while changing the train composition and the number of intermediate stations;
4. following up the impact of the changed train composition and number of intermediate stations on the system functioning indicators.

The simulation input included cases with 13 and 16 intermediate stations, with a real wagon distribution by intermediate station derived from the analysis of the operations of the Belgrade and Lapovo marshalling yards on the Railways of Serbia network (for the application of the Futhner method, three tracks are necessary in the cases with 5-9 intermediate stations, and four tracks in the cases with 10-16 intermediate stations), and an average time for collection (sorting) per wagon by track of 0.4 up to 0.7 min.

OUTPUT RESULTS

Instead of the complete output results, which are numerous and have the standard GPSS/H form [2,3,9,11], this paper presents a recapitulation of selected results for the individual indicators important for: determining the size of the track capacity on which the final forming of pick-up trains is made; evaluating newly designed solutions of technical freight yards, for the purpose of taking adequate measures before the yard and its facilities are built and placed in operation; and assessing the quality of station/yard operation, i.e. the functioning of the system, under the methods considered.

The indicators of the system functioning comprise: the number and length of tracks; the time required for forming the pick-up train (sorting and groupage); the wagon handling performance (number of wagons handled); and the number of movements in connecting and splitting the wagons.

These results are shown in Tables T-1 and T-2 and in Figures 4-7, where the symbols used have the following meaning:

max n_wi - maximum number of wagons appearing on the i-th track when forming the pick-up train;
Tm - time needed to make up (connect, i.e. group) the wagons by track on the station group of tracks;
Ts - time needed to split (sort) the wagons on the station group of tracks by intermediate station;
VSO - wagon performance (volume of shunting operations, i.e. the number of wagons moved from the collection track up to the return to the departure track);
NM - number of movements in forming the pick-up train, i.e. the number of splittings and connections.
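As a rough illustration of how indicators of this kind can be tallied, the sketch below counts handlings and movements for the two methods under assumed counting rules (one handling each time a wagon is pulled back to the shunting track, and one splitting plus one connecting per pass). The study's exact counting rules, which include the collection and departure moves and yield the larger 3-10x figures reported below, are not reproduced here.

```python
# Back-of-envelope tallies for VSO and NM under assumed counting rules.
import math

def futhner_indicators(n_wagons):
    # Every wagon is sorted twice and connected twice, regardless of n_is.
    vso = 2 * n_wagons          # wagon handlings over the two sorting passes
    nm = 2 + 2                  # two splittings and two connectings
    return vso, nm

def special_indicators(n_is, n_t, wagons_per_station=1):
    # A wagon for station s is re-handled on every pass until its station
    # reaches track 1; station 1 is finished by the first sorting.
    finish_pass = lambda s: 1 if s <= 1 else 1 + math.ceil((s - 1) / (n_t - 1))
    vso = wagons_per_station * sum(finish_pass(s) for s in range(1, n_is + 1))
    nm = 2 * finish_pass(n_is)  # one splitting + one connecting per pass
    return vso, nm

print(futhner_indicators(9))     # (18, 4)
print(special_indicators(9, 3))  # (29, 10): more handling than Futhner
```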
The output results show the following. The tracks used in the process of final forming by the Futhner method are of approximately equal lengths, i.e. their mutual differences are not large, whereas with the Special method this is not the case: the track lengths are notably unequal, with the first and the last tracks having equal, but notably greater, lengths than the middle (inner) ones, which in turn are of equal lengths. These data have not been known so far, and little attention has been paid to them in choosing final solutions, although they may play a very important role in that respect.

The total time needed for forming a pick-up train comprises the time needed for distributing the wagons by track (Ts) and the time needed for groupage (Tm). This total time ranges from 0.7 to 2.1 min/wagon, irrespective of the method applied. The wagon groupage time is 2-4 times shorter than the distribution time, amounting to 0.2-0.5 min/wagon; this indicates a high level of stability, i.e. an equalized duration of this part of the process, irrespective of the method used. The wagon distribution time by track and intermediate station is considerably less stable, and this instability is more pronounced with respect to changes in the number of wagons than in the number of intermediate stations. The total time needed to complete the whole process is longer with the Futhner method than with the Special method by about 10%.

The volume of shunting operations (the number of wagons actually handled, i.e. moved) considerably exceeds the number of wagons in the train. This increase ranges from 3 to 10 times, depending on the method applied: with the Futhner method the increase is up to seven times, and with the Special method up to 10 times. This indicates that the operating costs of the Special method would be higher than those of the Futhner method.

The number of movements (splittings and connections) ranges from 0.7 to 2.0 movements/wagon, irrespective of the method under consideration. Although the results obtained point to a high level of stability of this indicator, more detailed investigation shows that it is more sensitive to changes in the number of groups and the number of wagons in the train than to the total number of intermediate stations for which trains are formed.

CONCLUSIONS

Based on the research and the analysis of the obtained results, the following conclusions may be drawn. The process of forming pick-up trains by the conventional methods (the Futhner method and the Special method) is relatively easy to model and simulate by means of the simulation language GPSS/H. The output results of such a model show the actual situation in the application of the analysed methods and point to a series of failures in the design and operation of technical freight yards so far. These results may therefore be used as an additional argument in making important technological and investment decisions in the design and operation of new technical freight yards, or in the reconstruction of existing ones, as well as for evaluating the quality of operation of such yards. By extending the input elements and performing additional analyses it is possible to arrive at new important indicators of the application of these methods, for which a special study is necessary. The construction of this model establishes more favourable conditions, and gives an incentive, for a wider public and for the authors themselves to elaborate it further, addressing this and other similar problems.
Figure 1: Sequence of wagons in the composition: a) after the completed wagon collection; b) after the completed groupage of wagons

Figure 2: The forming of a pick-up train by the Futhner method

Figure 3: The forming of a pick-up train by the Special method

Figure 4: Track lengths necessary for the forming of pick-up trains

Table 2: Output results of the simulation for 5, 10 and 16 intermediate stations (Special method)
Slow dynamics and ergodicity in the one-dimensional self-gravitating system

We revisit the dynamics of the one-dimensional self-gravitating sheets model. We show that homogeneous and non-homogeneous states have different ergodic properties. The former is non-ergodic, and its one-particle distribution function has a vanishing collision term if a proper limit is taken for the periodic boundary conditions. Non-homogeneous states are ergodic over a time window of the order of the relaxation time to equilibrium, as similarly observed in other systems with long-range interactions. For the sheets model this relaxation time, compared to the initial violent relaxation time, is much larger than in other systems with long-range interactions.

Introduction

Lower-dimensional models retaining the main characteristics of realistic systems have always been an important tool for grasping the phenomenology in Statistical Physics. They have been particularly important in understanding the non-equilibrium dynamics and equilibrium properties of systems with long-range interactions, which often present unusual properties not observed when the interaction is short-ranged, such as non-ergodicity, anomalous diffusion, non-Gaussian quasi-stationary states, negative microcanonical heat capacity, ensemble inequivalence, and a very long relaxation time to thermodynamic equilibrium, diverging with the particle number [1,2,3,4,5,6,7,8,9,15,10,11,12,13,14,16,17]. Some one-dimensional models have been extensively studied in the literature, such as one-dimensional plasmas [18], one-dimensional self-gravitating systems (the sheets and shell models [19]), and derived models, e.g. the Ring [20] and the Hamiltonian Mean Field (HMF) [21] models.

The dynamics of systems with long-range interactions can typically be divided into three stages: a violent collisionless relaxation from the initial condition into a quasi-stationary state (or an oscillating state close to it), occurring in a very short time [22], followed by a very slow evolution to thermodynamic equilibrium caused by the small cumulative effects of collisions (graininess). The final, third stage is thermodynamic equilibrium, which may never be attained in the $N \to \infty$ limit, in which the mean-field description becomes exact and the collisional contributions to the kinetic equation vanish. In this limit, and under suitable conditions, the dynamics is exactly described by the Vlasov equation [23,7].

Let us consider a system of $N$ identical particles described by the Hamiltonian

$$H = \sum_{i=1}^{N} \frac{p_i^2}{2m} + \frac{1}{N} \sum_{i<j} V_{ij}, \qquad (1)$$

with the interparticle potential $V_{ij} \equiv V(|x_i - x_j|)$, $p_i$ and $x_i$ the momentum and position of particle $i$, respectively, and $m$ the mass of the particles. The factor $1/N$ in the potential energy term in Eq. (1) is introduced so that the total energy is extensive [24] (the so-called Kac factor). The one-particle distribution function $f(x, p; t)$ then satisfies the Vlasov equation

$$\frac{\partial f}{\partial t} + \frac{p}{m} \frac{\partial f}{\partial x} + F(x; t) \frac{\partial f}{\partial p} = 0, \qquad (2)$$

where the mean-field force is given by

$$F(x; t) = -\frac{\partial}{\partial x} \int \mathrm{d}x' \, \mathrm{d}p' \; V(|x - x'|) \, f(x', p'; t). \qquad (3)$$

Collisional effects modify the Vlasov equation such that $\dot{f} = C[f]$, where the collisional integral $C[f]$ is a functional of $f$, usually obtained using some approximation such as the weak-coupling limit, in which the interparticle force is taken to be of order $\lambda \ll 1$ and $C[f]$ is computed up to order $\lambda^2$, or by retaining terms of order $1/N$. The resulting kinetic equations are called the Landau and the Balescu-Lenard equations, respectively [25]. For one-dimensional systems the collisional integral in the Balescu-Lenard, Landau and Boltzmann equations vanishes identically in a homogeneous state, and one must go to the next order in the approximation, i.e.
by computing $C[f]$ up to order $\lambda^3$ or $1/N^2$ [26,27,8,9].

An example of a system with a vanishing collisional integral for both homogeneous and non-homogeneous states is given by identical particles in one dimension interacting only through a zero-distance hard-core potential. In this case the interaction causes a swap of particle velocities, and by simply relabeling the particles at the moment of collision one obtains a statistically equivalent system of free particles, such that the one-particle distribution function evolves only through free streaming. The corresponding kinetic equation is then the one-dimensional Liouville equation with zero force:

$$\frac{\partial f}{\partial t} + \frac{p}{m} \frac{\partial f}{\partial x} = 0, \qquad (4)$$

where $m$ is the mass, $x$ the position and $p$ the momentum. For a homogeneous state the one-particle distribution function is strictly constant, i.e. the collisional integral vanishes identically.

Another simple model, but with long-range interacting particles and real collisions (due to the discontinuity of the force at zero distance), is the one-dimensional self-gravitating system of $N$ identical particles with unit mass and Hamiltonian [28]

$$H = \sum_{i=1}^{N} \frac{p_i^2}{2} + \frac{1}{N} \sum_{i<j} |x_i - x_j|. \qquad (5)$$

The force on particle $i$ is given by $F_i = (N_i^{(+)} - N_i^{(-)})/N$, where $N_i^{(+)}$ and $N_i^{(-)}$ are the numbers of particles to the right and to the left of particle $i$, respectively, and particles can cross each other freely. The potential in this Hamiltonian is obtained from the solution of the Poisson equation in one spatial dimension and corresponds to a system of infinite sheets with finite total mass. The dynamics of this model has been studied in the literature over the last few decades, with the recurrent question of whether the system relaxes to thermodynamic equilibrium, given the extremely slow dynamics of its macroscopic parameters [29,30,28,31,32]. Joyce and Worrakitpoonpon introduced an order parameter to measure the distance to equilibrium and showed that this system in a non-homogeneous state evolves to thermodynamic equilibrium [33]. They showed this for numbers of particles up to $N = 800$, yet a very long simulation time was required to observe the complete relaxation. This implies that the contribution of the collisional integral of the corresponding kinetic equation is very small.

The very slow relaxation towards equilibrium also manifests itself in the ergodic properties of the system. A system with long-range interactions is ergodic if averages of observables over the history of a single particle are equal to the ensemble average, i.e. to an average computed at a fixed time over the $N$ particles in the system. This approach was used for the HMF model [15,34] and for a two-dimensional self-gravitating system [10]. In the limit $N \to \infty$ these systems are non-ergodic and never reach true thermodynamic equilibrium, while for finite $N$ they are ergodic only after a time window of the order of the relaxation time to equilibrium. Here we show that these results are also valid for the one-dimensional self-gravitating system with the Hamiltonian in Eq. (5) in a non-homogeneous state, but not in the homogeneous case. Indeed, in the latter we show that, by properly considering periodic boundary conditions and then taking the limit of the size of the unit cell going to infinity while keeping the density constant, the one-particle distribution function does not evolve in time, i.e. the collisional effects vanish.

The paper is structured as follows: in Section 2 we discuss separately the ergodic properties of homogeneous and non-homogeneous states of the sheets model; the kinetic equation for the homogeneous state, with identically vanishing collisional contributions, is obtained in Sec. 3; and we close with some concluding remarks in Sec. 4.
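The distance-independent pair force behind Eq. (5) makes the net force on a sheet a pure counting exercise, which the following minimal sketch illustrates. The force expression is the one reconstructed above, with the $1/N$ Kac normalization of Eq. (1); the overall prefactor and the function name are assumptions of this sketch.

```python
# Sketch of the sheet-model force: every pair attracts with a
# distance-independent force, so F_i depends only on how many sheets lie
# on each side of sheet i.
import numpy as np

def sheet_forces(x):
    """F_i = (number of sheets to the right - number to the left) / N."""
    n = len(x)
    rank = np.argsort(np.argsort(x))     # position rank of each sheet, 0..n-1
    return ((n - 1 - rank) - rank) / n   # N_i^(+) - N_i^(-), Kac-scaled

x = np.random.uniform(-1.0, 1.0, 7)
print(sheet_forces(x))                   # sums to zero: no net self-force
```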
Slow dynamics and ergodicity

We investigate the ergodic properties of the sheets model using the approach of Ref. [10]. The system is ergodic if time averages taken over a time window of length $t$ equal the ensemble average over the $N$ particles at the same fixed time $t$; the time after which this holds we call the ergodicity time $t_{\rm erg}$. We define the time average of the momentum of the $i$-th particle,

$$\overline{p}_i(t) = \frac{1}{n} \sum_{k=1}^{n} p_i(k\,\Delta t), \qquad (6)$$

and similarly the time average of its position,

$$\overline{x}_i(t) = \frac{1}{n} \sum_{k=1}^{n} x_i(k\,\Delta t), \qquad (7)$$

with a fixed time step $\Delta t$ and $n = t/\Delta t$. We also consider the time-dependent standard deviations (supposing that the averages over all particles vanish, $\langle \overline{p} \rangle = 0$ and $\langle \overline{x} \rangle = 0$)

$$\sigma_p(t) = \left[ \frac{1}{N} \sum_{i=1}^{N} \overline{p}_i(t)^2 \right]^{1/2} \qquad (8)$$

and

$$\sigma_x(t) = \left[ \frac{1}{N} \sum_{i=1}^{N} \overline{x}_i(t)^2 \right]^{1/2}. \qquad (9)$$

Ergodicity for a system with long-range interactions is then equivalent to [10]

$$\sigma_p(t) \to 0 \quad \text{and} \quad \sigma_x(t) \to 0 \quad \text{for } t \to t_{\rm erg}. \qquad (10)$$

It was shown for the HMF model and for a two-dimensional self-gravitating system that $t_{\rm erg} \approx t_{\rm R}$, with $t_{\rm R}$ the relaxation time to thermodynamic equilibrium [10,15,34]. We now consider separately the ergodic properties of non-homogeneous and homogeneous states of the sheets model.

Non-homogeneous state

In order to put in evidence the very large value of the ergodicity time, we implemented a molecular dynamics simulation of an open $N$-particle system (no spatial boundary conditions) with the Hamiltonian in Eq. (5), using an event-driven algorithm [35]. The dynamics between two successive particle crossings is integrable and can be computed up to machine precision. Collisions are then implemented straightforwardly by updating the force on the particles after each crossing. Due to the very high local densities at the core of the spatial distribution, high numeric precision is required, and we used quadruple precision in order to avoid missing any collision due to round-off errors (which indeed occur in double precision). The initial state is a waterbag state defined by

$$f_0(x, p) = \frac{1}{4 x_0 p_0} \, \theta(x_0 - |x|) \, \theta(p_0 - |p|), \qquad (11)$$

with $x_0$ and $p_0$ given constants and $\theta$ the Heaviside step function. To measure the distance to the Gaussian distribution we use the reduced moments

$$\mu_{2k} = \frac{\langle p^{2k} \rangle}{\langle p^2 \rangle^{k}}. \qquad (12)$$

The reduced moment of order 4 is called the kurtosis of the distribution, and for any Gaussian distribution $\mu_4 = 3$ and $\mu_6 = 15$. The left panel of Fig. 1 shows the time evolution of $\mu_4$ and $\mu_6$ for the system, with $x_0 = 10.0$ and $p_0 = 0.5$ for the initial condition. In this case the relaxation time to equilibrium is of the order of $t_{\rm R} \approx 10^6$. The right panel of Fig. 1 shows that the condition for ergodicity stated in Eq. (10) is satisfied for a time of the order of magnitude of the relaxation time to equilibrium, $t_{\rm erg} \approx t_{\rm R}$.

In order to discuss the physical meaning of ergodicity for a long-range interacting system, we define the one-particle momentum and position probability densities $f(p; t)$ and $f(x; t)$ at a given time $t$ as the probability densities for the values of $p$ and $x$, respectively. We also define the distributions of the values of $p$ and $x$ for a fixed particle, say the $i$-th particle, along its history up to time $t$, denoted by $g(p; t)$ and $h(x; t)$, respectively. In the present case, ergodicity is then equivalent to the relations

$$f(p; t) = g(p; t) \qquad (13)$$

and

$$f(x; t) = h(x; t) \qquad (14)$$

for $t \gtrsim t_{\rm erg} \approx t_{\rm R}$. Figures 2 and 3 show these distributions for a few values of time, together with the equilibrium spatial distribution function $\rho(x) \propto \mathrm{sech}^2(x/\Lambda)$, with $\Lambda = 4\varepsilon/3$ and $\varepsilon$ the mean-field energy per particle [36], and the Gaussian equilibrium momentum distribution. It is evident that the time and ensemble distributions become very close as $t$ approaches $t_{\rm erg}$. Thus the momentum ($f$ and $g$) and spatial ($f$ and $h$) distribution functions satisfy Eqs. (13) and (14), and are also equal to the equilibrium distributions, for times of the order of magnitude of the relaxation time to equilibrium, as was also observed for other long-range interacting systems [15,34,10].
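The diagnostic of Eqs. (6)-(10) is straightforward to prototype. The sketch below uses a plain leapfrog integrator instead of the exact event-driven, quadruple-precision scheme used in the paper, and all parameters (time step, duration, waterbag bounds) are illustrative choices, so it only shows the structure of the test, not its published results.

```python
# Sketch of the ergodicity diagnostic: evolve the sheets and track the
# spread of the per-particle running time averages across the ensemble.
import numpy as np

def forces(x):
    n = len(x)
    rank = np.argsort(np.argsort(x))
    return (n - 1 - 2 * rank) / n        # sheet-model force, Kac-scaled

def sigma_histories(n=200, x0=10.0, p0=0.5, dt=0.05, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-x0, x0, n)          # waterbag initial condition, Eq. (11)
    p = rng.uniform(-p0, p0, n)
    xbar = np.zeros(n)                   # running time averages, Eqs. (6)-(7)
    pbar = np.zeros(n)
    sig_x, sig_p = [], []
    for k in range(1, steps + 1):
        p += 0.5 * dt * forces(x)        # leapfrog (kick-drift-kick)
        x += dt * p
        p += 0.5 * dt * forces(x)
        xbar += (x - xbar) / k           # incremental update of the averages
        pbar += (p - pbar) / k
        sig_x.append(xbar.std())         # Eqs. (8)-(9): spread over particles
        sig_p.append(pbar.std())
    return np.array(sig_x), np.array(sig_p)

sig_x, sig_p = sigma_histories()
print(sig_x[-1], sig_p[-1])              # both should shrink as t -> t_erg
```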
Homogeneous state

We now turn to the case of a homogeneous state. Periodic boundary conditions can be implemented using an Ewald sum with a unit cell $x \in [-L, L]$, such that the force on each particle, due to the particles in the unit cell and the infinite number of images, is determined by a direct sum over replicas [37]. For the one-dimensional self-gravitating system, a closed analytical form was obtained by Miller and Rouet [38] as an additional potential representing all replicas, given in Eq. (15). The full effective Hamiltonian with periodic boundary conditions is then of the form of Eq. (16), with $X \equiv (x_1, \ldots, x_N)$ and the total potential given in Eq. (17). The resulting equations of motion are then integrated using a fourth-order symplectic integrator [39,40]. The reduced moments $\mu_4$ and $\mu_6$ as a function of time, up to $t = 10^5$, are shown in Fig. 4 for an initial waterbag state with $x_0 = L = 1$ and $p_0 = 3$. The system remains in a homogeneous state for the whole simulation time. We observe that the time evolution is extremely slow compared to the non-homogeneous case, with only a very small variation of $\mu_6$ visible in the graph. Figure 5 shows the distribution functions $f(x; t)$ and $f(p; t)$ at the final time, also clearly at variance with what is observed in the non-homogeneous cases. Although the spatial distribution $f(x; t)$ is roughly uniform, as expected since particles can cross each other and the mean-field force is very small, the momentum distribution $f(p; t)$ is not even symmetric, as it is for the non-homogeneous systems at all times except a very short initial interval. We conclude that the ergodicity time, if finite, is certainly many orders of magnitude greater than for a non-homogeneous state. We shed some light on the physical origin of this difference, and of the peculiar dynamics of the homogeneous state, in the next section by discussing the kinetic theory of a homogeneous state.

From Eqs. (18), (21) and (22) one obtains an equation for the two-particle correlation function, Eq. (23) [25]. One then determines a solution for the two-particle correlation in terms of the one-particle distribution, and also a solution for the three-particle correlation if it is not negligible (see [8] and [27] for systems where three-particle correlations are important), and uses the result in Eq. (20).

Now we turn to the one-dimensional self-gravitating system with the Hamiltonian in Eq. (5) in a homogeneous state. The derivative of the total potential in Eq. (17) appears in Eq. (23), and we must consequently account for the singularity of this derivative at zero interparticle distance. We consider the following relabeling of particle indices: at the moment two particles (sheets) cross each other, we interchange their labels. In this way, at each collision (at zero distance) the particles simply exchange their momenta and the force is constant in time. If the particles are initially labeled such that $x_i < x_j$ if $i < j$, the ordering in position is preserved. The force on particle $i$ due to particle $j$ can then be written as $F_{ij} = F_{ij}^{\rm grav} + F_{ij}^{\rm HC}$, where $F_{ij}^{\rm grav} = -V'(x_i - x_j)$, with $V$ given in Eq. (17), and $F_{ij}^{\rm HC}$ stands for the hard-core force that swaps particle momenta when they collide at zero distance. The contribution of $F^{\rm grav}$ to Eq. (23) vanishes in the limit $L \to \infty$, as the gravitational force in a homogeneous state vanishes.
To illustrate this fact, Figure 6 shows the force $F^{\rm grav}$, due to both the self-gravitating potential and the Ewald sum, for a few values of the number of particles $N$ while keeping the density $\rho = N/L$ constant. We note that increasing $N$ in this way is not equivalent to considering the thermodynamic limit, which would correspond to taking $N \to \infty$ while keeping $L$ constant. We observe that as the size $L$ of the unit cell increases, $F^{\rm grav}$ approaches zero. As a consequence, only contributions from hard-core collisions are retained in Eq. (23). This result in fact proves the validity of the Jeans swindle for the model considered here, i.e. that the contribution of the infinite homogeneous background to the interaction vanishes, and one must consider only the effects of local fluctuations in density [42,43]. These fluctuations vanish as the size of the unit cell goes to infinity.

The same reasoning can be applied in an analogous way to the BBGKY hierarchy, which then takes exactly the same form as the hierarchy obtained for a system of particles whose only interaction is a hard-core potential at zero distance. For such a system in a homogeneous state, the one-particle distribution function $f_1(p; t)$ is strictly constant in time, as the interaction only swaps the momenta of two particles at each collision, and three-particle processes are nonexistent (the probability that three particles collide at the same time at the same point is zero). For the same initial condition, the BBGKY hierarchy being identical for both systems, the time evolution of the reduced distribution functions must be the same, and therefore the distribution $f_1(p; t)$ for a homogeneous one-dimensional self-gravitating system is constant in time. Small deviations from this are expected to occur in numerical simulations due to spurious non-physical effects resulting from a finite value of $N$, which produce small fluctuations of the force around zero.

Concluding Remarks

We have shown that the sheets model describing a one-dimensional self-gravitating system has profoundly different dynamical properties depending on whether it is in a homogeneous or a non-homogeneous state. In the former case we showed that, by considering a proper limit in the periodic boundary conditions, the one-particle distribution function does not evolve in time, as its kinetic equation is essentially a Boltzmann-like equation. In a non-homogeneous state the system has a slow dynamics towards equilibrium, with a relaxation time much greater than in other long-range interacting systems when the violent relaxation time is used for comparison. The non-homogeneous system is ergodic, but only after a time of the order of the relaxation time to equilibrium, as also observed for other long-range interacting systems; in a homogeneous state it is non-ergodic, as illustrated by the simulations presented here. A possible way to shed light on the slow dynamics of this system in non-homogeneous states is to obtain a kinetic equation, which for the present model is a challenging task, as it requires the determination of action-angle variables for the mean-field description of the system [44,45] and has been possible only in very special cases (see [46] and references therein). This is the subject of ongoing research.
Non-Markovian collective emission from macroscopically separated emitters

We study the collective radiative decay of a system of two two-level emitters coupled to a one-dimensional waveguide in a regime where their separation is comparable to the coherence length of a spontaneously emitted photon. The electromagnetic field propagating in the cavity-like geometry formed by the emitters exerts a retarded backaction on the system, leading to strongly non-Markovian dynamics. The collective spontaneous emission rate of the emitters exhibits an enhancement or inhibition beyond the usual Dicke super- and sub-radiance due to a self-consistent coherent time-delayed feedback.

Introduction.- Long-distance interactions are a central tenet of many quantum systems and processes, including large-scale quantum networks, distributed quantum sensing and information processing [1-5]. When the separations between emitters become comparable to the coherence length of the photons mediating their interaction, the interference properties of the electromagnetic (EM) field can be modified due to retardation. In such cases, the backaction of the EM field on the emitters leads to a coherent time-delayed feedback on the system dynamics [6,7], thus rendering it non-Markovian [8-11]. Additional insight into non-Markovian effects in this regime can be gained from studying the simple, yet rich, quantum optics phenomenon of collective spontaneous emission of two two-level emitters. Cooperative effects in spontaneous emission have an extensive historical background [32-35] and have been experimentally observed across a range of physical systems [6,36-42,44]. While the influence of retardation on these effects has been studied previously in Refs. [2,3], the non-Markovian dynamics emerging in macroscopically delocalized collective systems was as yet unexplored.

In this Letter we study the collective radiative dynamics of a pair of macroscopically separated emitters and show that it exhibits non-Markovian features caused by a self-consistent coherent time-delayed feedback. We specifically consider emitters prepared in a super- or sub-radiant electronic state, and present an exact analytical solution of the dynamics of the collective spontaneous emission. We demonstrate that the retarded backaction of the EM field on the emitters can lead to a spontaneous emission rate that is further enhanced (inhibited) for superradiant (subradiant) states, beyond the usual Dicke superradiance (subradiance) [32,33].

We consider two two-level emitters coupled to a waveguide and separated by a distance d comparable to the coherence length ~ v_g/γ of a spontaneously emitted photon, with v_g the group velocity of the field and γ the spontaneous emission rate of the individual emitters (see Fig. 1).

FIG. 1. Two two-level emitters prepared in a collective state coupled to an optical waveguide. The emitters are located at positions x_{1,2} = ±d/2, with d comparable to the coherence length ~ v_g/γ. The rates γ_3D and γ_1D refer to the emitter spontaneous emission rates into free space and into the guided modes, respectively. The mode operators â(ω) and b̂(ω) are annihilation operators for the right- and left-propagating waveguide modes, respectively.

To gain an intuitive understanding of the non-Markovian nature of this system, consider the following apparent "superradiance paradox." Assume that the distance d between two emitters prepared in a superradiant state is smaller than the coherence length of an
independently emitted photon, but larger than that of a superradiant photon, $v_g/\gamma > d > v_g/(2\gamma)$. Given that superradiance is an interference effect, one would expect to observe superradiant emission if there is no way to distinguish which atom emitted the field [53]. Now if the emitters radiate collectively, with an emission rate $2\gamma$, then the coherence length of the emitted photons ($v_g/(2\gamma)$) is too short to allow the fields radiated by the two emitters to interfere, suggesting that they should have emitted independently. On the other hand, if we assume that they emit independently, then the coherence length of the emitted photons ($v_g/\gamma$) is long enough that there should be interference, and as a result the emitters should emit at the superradiant rate of $2\gamma$ instead. This seeming paradox points to the failure of the Markov approximation: the conventional notion of an exponential decay defining the photon coherence length is no longer valid, and it is necessary instead to consider a fully non-Markovian treatment of the system dynamics.

Formal development.- The total Hamiltonian for the emitters + field system is $H = H_A + H_F + H_{\rm int}$, where $H_A$ is the free Hamiltonian of the emitters, $H_F = \int d\omega\, \hbar\omega\, [\hat{a}^\dagger(\omega)\hat{a}(\omega) + \hat{b}^\dagger(\omega)\hat{b}(\omega)]$ is the free Hamiltonian of the field, with $\hat{a}(\omega)$ and $\hat{b}(\omega)$ the annihilation operators for the right- and left-propagating field modes of the waveguide, respectively, and $H_{\rm int}$ is the emitter-field interaction Hamiltonian. We proceed by making the electric-dipole and rotating-wave approximations (RWA) and expressing the emitter-field interaction Hamiltonian in the interaction picture with respect to the total free Hamiltonian (Eq. (1)), where $g(\omega)$ is the atom-field coupling strength [54,57]. To isolate the non-Markovian behavior arising from retardation effects from that due to a structured reservoir, we assume a flat spectral density of the field modes around the resonance of the emitters, such that $g(\omega) \approx g(\omega_0)$.

Assuming that the total emitters-plus-field system is initially prepared in the single-excitation manifold, and considering that in the RWA the Hamiltonian preserves the total number of excitations, the state at time $t > 0$ can be written in terms of the excitation amplitudes $c_m(t)$ of the $m$-th emitter and $c_{a,b}(\omega, t)$ of the guided field modes of frequency $\omega$ (Eq. (2)), with $|g, g, \{0\}\rangle$ the ground state of the total system and $|\{0\}\rangle$ the field vacuum state. Tracing out the field modes, the evolution of the emitter excitation amplitudes is given by [62]

$$\dot{c}_m(t) = -\frac{\gamma}{2}\, c_m(t) - \frac{\gamma_{\rm 1D}}{2}\, e^{i\varphi_p}\, c_n(t - d/v_g)\, \theta(t - d/v_g) \qquad (3)$$

for $m \neq n$, where $\varphi_p \equiv k_0 d = 2p\pi$ is the field phase difference upon propagation, which we assume to be an integer multiple of $2\pi$; $\gamma \equiv \gamma_{\rm 1D} + \gamma_{\rm 3D}$ is the total spontaneous emission rate; and $\gamma_{\rm 1D} = \beta\gamma \equiv 4\pi |g(\omega_0)|^2$ is the spontaneous emission rate into the waveguide, with $\beta$ the coupling efficiency of the emitters to the waveguide. The second term in Eq. (3) represents the retarded backaction of the other emitter via the field, with a delay $d/v_g$.

For emitters initially in the super- or sub-radiant states $|\Psi_{\rm sup/sub}\rangle = (|e, g\rangle \pm |g, e\rangle)/\sqrt{2}$, the Laplace-space probability amplitudes for the super- and sub-radiant cases, respectively, are [64]

$$\tilde{c}_{\rm sup}(\tilde{s}) = \frac{1}{\tilde{s} + \tfrac{1}{2} + \tfrac{\beta}{2} e^{-\eta\tilde{s}}} \qquad (4)$$

and

$$\tilde{c}_{\rm sub}(\tilde{s}) = \frac{1}{\tilde{s} + \tfrac{1}{2} - \tfrac{\beta}{2} e^{-\eta\tilde{s}}}, \qquad (5)$$

where $\tilde{s} \equiv s/\gamma$ and $\eta \equiv d\gamma/v_g$ is the separation between the emitters normalized by the photon coherence length.

Consider next the case where the emitters are slightly separated, $\eta \ll 1$. Expanding $e^{-\eta\tilde{s}}$ up to linear terms in $\eta$ yields an effective spontaneous emission rate

$$\gamma_{\rm sup/sub}^{\rm eff} = \gamma\, \frac{1 \pm \beta}{1 \mp \beta\eta/2}.$$

For a small but finite delay $0 < \eta \ll 1$, this can potentially exceed the usual Dicke superradiant emission rate of $2\gamma$ for $\beta = 1$. Also, for a subradiant state with imperfect coupling ($\beta < 1$), the effective decay for slightly separated emitters can be slower than that for coincident ones. This surprising enhancement and inhibition of the collective spontaneous emission can be attributed to stimulated emission, as the correlated field emitted by one of the emitters interferes with that from the other [65]. The dependence of the collective emission rate on the separation, in addition to the phase difference, demonstrates the influence of retardation on the interference.
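Because the symmetric and antisymmetric states keep the two amplitudes equal, Eq. (3) reduces to a scalar delay differential equation that is easy to integrate numerically. The sketch below uses the equation of motion as reconstructed above (time in units of $1/\gamma$, phase $\varphi_p = 2p\pi$); the step size and parameter values are illustrative choices, not taken from the paper.

```python
# Numerical sketch of the reconstructed delay equation (3) for the
# collective states, where c1(t) = c2(t) = c(t).
import numpy as np

def collective_decay(beta=1.0, eta=0.5, sign=+1, dt=1e-3, t_max=10.0):
    """Euler scheme for dc/dt = -c/2 - sign*(beta/2)*c(t - eta)*theta(t - eta);
    sign = +1 gives the superradiant state, sign = -1 the subradiant one."""
    n = int(t_max / dt)
    delay = int(round(eta / dt))
    c = np.zeros(n + 1)
    c[0] = 1.0
    for k in range(n):
        fb = c[k - delay] if k >= delay else 0.0   # retarded feedback term
        c[k + 1] = c[k] + dt * (-0.5 * c[k] - sign * 0.5 * beta * fb)
    return c

c = collective_decay()
t = np.arange(len(c)) * 1e-3
rate = -np.gradient(np.log(c ** 2), t)   # instantaneous decay rate of |c|^2
print(rate[int(0.25 / 1e-3)])            # ~1: independent decay for t < eta
print(rate.max())                        # transiently exceeds the Dicke value 2
```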
We now consider the general case of arbitrarily separated emitters, for which we present an exact analytical solution of the equations of motion (3), based on a well-developed mathematical treatment of delay differential equations (see [1] and the Supplemental Material (SM) [67] for details). The general expression for the excitation amplitudes of the emitters is

$$c_{\rm sup/sub}(t) = \sum_n \alpha_n^{(\pm)}\, e^{-\gamma_n^{(\pm)} t/2}, \qquad (8)$$

where $\alpha_n^{(\pm)} \equiv \left[1 + W_n\!\left(\mp\tfrac{\beta\eta}{2}\, e^{\eta/2}\right)\right]^{-1}$, the effective decay rates are $\gamma_n^{(\pm)} = \gamma\left[1 - \tfrac{2}{\eta}\, W_n\!\left(\mp\tfrac{\beta\eta}{2}\, e^{\eta/2}\right)\right]$, and $W_n$ is the $n$-th branch of the Lambert $W$-function, which is commonly used to describe systems exhibiting time-delayed feedback [1,62]. We now discuss the consequences of this analytical solution, which is the main result of this work.

Results.- Consider first the dynamics of a superradiant initial state. From Eq. (8) and the properties of the Lambert $W$-function one finds that the superradiant solution has imaginary exponents for $\eta > \eta_c$, where we have introduced the normalized critical distance $\eta_c \equiv 2 W_0(1/(e\beta))$ [62]. Thus for $\eta \geq \eta_c$ the atomic excitation amplitudes exhibit oscillations as the atoms decay to their ground state. These can be understood in terms of a field wavepacket bouncing back and forth between the emitters [2,67]. For $\beta = 1$ this occurs for separations $d > 0.56\, v_g/\gamma$, as shown in Fig. 2. For separations $\eta < \eta_c$ the emitters radiate independently until a time $\gamma t = \eta$ and collectively afterwards, with an instantaneous decay rate $\gamma_{\rm inst}(t)$ given by Eq. (9). For a given value of $\beta$, this rate reaches a maximum $\gamma_{\rm inst}^{\rm max}$ when the normalized emitter separation equals its critical value $\eta = \eta_c$, with

$$\gamma_{\rm inst}^{\rm max}/\gamma = 1 - \frac{W_0(-1/e)}{W_0(1/(e\beta))},$$

as shown in Fig. 3. In the absence of losses and for perfect emitter-waveguide coupling efficiency ($\beta = 1$), the maximum instantaneous spontaneous emission rate is $\gamma_{\rm inst}^{\rm max}/\gamma \approx 4.59$, in stark contrast with superradiant emission in Markovian systems.
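The Lambert-W expressions above can be checked numerically with scipy's principal branch; the snippet below only verifies the internal consistency of the formulas as reconstructed here.

```python
# Quick numerical check of the quoted Lambert-W expressions.
import numpy as np
from scipy.special import lambertw

beta = 1.0
eta_c = 2.0 * lambertw(1.0 / (np.e * beta)).real
rate_max = 1.0 - lambertw(-1.0 / np.e).real / lambertw(1.0 / (np.e * beta)).real
print(eta_c)      # ~0.557: monotonic decay below, oscillations above
print(rate_max)   # ~4.59 for beta = 1, well beyond the Dicke value of 2
```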
In the case of a subradiant initial state with $\beta = 1$, the steady state of the dynamics corresponds to a bound state in the continuum (BIC) [28]. The probability of reaching the BIC starting from the subradiant state of the atoms is given by $|\langle \Psi(t \to \infty)|\Psi_{\rm BIC}\rangle|^2 = 1/(1 + \eta/2)$ [68]. The total probability of the emitters being excited in the steady state behaves as $|c_{1,2}^{\rm sub}(\infty)|^2 \to \tfrac{1}{2}(1 + \eta/2)^{-2}$; see Fig. 2(b). We also note that for an initial subradiant state with a delay of $\eta \approx 0.8$ it is possible to achieve maximal emitter-field steady-state entanglement [67].

It is also instructive to explore the cooperative nature of the atom-field dynamics from the perspective of the emitted field intensity $I(x, t) \propto \langle \Psi(t)|\hat{E}^{(-)}(x)\hat{E}^{(+)}(x)|\Psi(t)\rangle$, with $|\Psi(t)\rangle$ the state of the system (see [67] for details). Fig. 4 shows that the fields emitted by the two emitters in the superradiant (subradiant) case interfere constructively (destructively) when the light cones of the two emitters reach each other. Thereafter, depending on their relative phase, they produce an interference pattern that is either constructive, leading to a collective 'superduperradiant' burst with an instantaneous emission rate greater than $2\gamma$, or destructive, resulting in a perfect reflection of the field into the optical cavity created by the two atoms. The non-exponential decay of the emitters is an unambiguous signature of the non-Markovian evolution of the system, a result of the self-consistent backaction of the EM field bath, which is accounted for by a departure from the usual Lindblad dynamics. We further quantify the non-Markovianity of the system in the Supplemental Material [68], which shows that the system is non-Markovian for any value of $\eta$, approaching Markovian behavior for $\eta \to 0$.

Noting that in the presence of delay the instantaneous collective decay rate can exceed that of standard Dicke superradiance, one might wonder whether the total collective emission into the waveguide is also enhanced. An important figure of merit for quantifying the collective nature of the system in this regard is its cooperativity $C \equiv \gamma_{\rm in}/\gamma_{\rm 3D}$ [69], where $\gamma_{\rm in} = \lim_{t\to\infty} \int_0^\infty d\omega\, [\,|c_a(\omega, t)|^2 + |c_b(\omega, t)|^2\,]$ is the fraction of the field emitted into the waveguide and $\gamma_{\rm 3D} = \gamma(1 - \beta)$ is the fraction of the field that escapes into the non-guided modes [70]. This can be evaluated analytically, giving Eq. (10) [67]. For $\eta > 0$ the cooperativity of a superradiant state is reduced compared to that of coincident emitters ($\eta = 0$), as the total collective emission into the guided modes decreases with the emitter separation. In contrast, for an antisymmetric state we find an enhanced emission into the waveguide as $\eta$ is increased. This is because the individual emitters emit into the guided modes until $\gamma t = \eta$, before they start acting collectively (see Fig. 4(b)). Given that cooperativity is an important figure of merit in quantum information applications, this result illustrates that retardation effects need to be carefully considered in quantum network protocols based on distant emitters [71].

Summary and outlook.- We have shown that the collective radiative decay of two emitters coupled to a one-dimensional waveguide is subject to non-Markovian modifications due to the time-delayed backaction of the electromagnetic field upon the emitters. When prepared in a superradiant initial state, the emitters can exhibit time-dependent decay rates that instantaneously surpass the standard Dicke superradiance rate. The system also allows for long-lived subradiant states characterized by a bound state of the field trapped in the region between the emitters. These effects can be understood as a combination of Dicke super- or sub-radiance and retardation of the field wavepacket, whereby the electromagnetic field senses its boundary conditions with a significant delay. A key parameter characterizing the dynamics is the emitter separation relative to the photon coherence length, $\eta \equiv d\gamma/v_g$. It captures the combined physical origin of the non-Markovian behavior: an appreciable value of $\eta$ can be achieved by increasing the emitter separation $d$, but also by increasing the system-environment coupling, as in [31], or by exploiting the slow group velocities achievable in the presence of a band gap or near a band edge [51]. Importantly, as $\eta$ is increased, describing the system dynamics requires keeping track of field correlation functions of increasing order.
We note that the non-Markovianity in this case arises explicitly from retarded backaction effects, despite the flat spectral density of the bath. Experimental observations of these effects could be realized across a number of platforms, including quantum dots in photonic waveguides [6], atoms near optical nanofibers [5,42,72], and superconducting qubits coupled by coplanar waveguides [7,8]. Table I in the SM [67] summarizes the experimental parameters accessible so far. For a system of atoms coupled to nanofibers, values of $\eta \sim 1$ have already been realized [5]. Given the rapid experimental progress in all these platforms, the retarded collective effects studied here can become relevant in the near future. With the emerging possibility of preparing collective dipoles subject to internal retardation effects and observing their associated complex dynamics, in sharp contrast with the more familiar case of emitters confined in subwavelength regions, our work adds a new intricacy that has been little explored in the past. Given that the enhancement of the retarded collective decay of two emitters relies on a pairwise time-delayed feedback, it will be interesting to determine the scaling of these effects with the number of emitters. We also note that similar dynamics can arise in a system of linear oscillators [16], indicating that such collective retarded dissipation should be observable in classical systems as well. It would then be interesting to extend the present dynamics from the single-excitation case considered here to multiple excitations, where one can observe genuinely quantum non-Markovian effects, such as the phenomenon of superfluorescence [76] with retardation, in which all the emitters decay collectively from a fully excited state.

ALTERNATIVE SOLUTION IN TERMS OF WAVEPACKET OSCILLATIONS

Consider Eqs. (4) and (5) from the main text, noting that $|(\beta/2)\, e^{-\eta s}/(s + 1/2)| < 1$, so that the denominators in Eqs. (4) and (5) of the main text can be expanded as a geometric series (Eq. (S20)). The dynamics in this limit has been studied previously [S2-S4], leading to the atomic excitation amplitudes. Taking the inverse Laplace transform term by term, one obtains a sum in which the index $j$ physically corresponds to the number of round trips of the photon wavepacket between the two emitters. The multiple reflections of the photon wavepacket modify the time evolution of the spontaneous decay, making it non-exponential, as given in [S2]. It is helpful to consider this physical picture and expansion in the large-$\eta$ limit, as it allows one to see how the wavepackets bounce between the atoms. However, the solution in terms of $W$-functions allows one to understand the collective dynamics of the emitters more effectively, making it possible to (1) determine the critical separation $\eta_c$ up to which the emitters decay monotonically, as opposed to exhibiting oscillatory dynamics, (2) evaluate the instantaneous spontaneous emission rate of the atomic decay after the onset of collective dynamics, and (3) calculate the probabilities of atomic excitation in the steady state when a BIC is formed.

For the superradiant state, it can be seen that the two light cones interfere constructively with each other going outwards from the atoms and contribute to a beyond-superradiant burst, as can be seen from Fig. 4 of the main text.
Similarly for the subradiant state, we note that, as an important difference from the superradiant case, the n = 0 term of γ appears to diverge due to a pole contribution on the real axis (which would otherwise correspond to a divergent self-energy term). We therefore take only the principal value of the integral to obtain the corresponding result. Again, similar to the superradiant case, the four terms correspond to the fields emitted by the two atoms into the left- and right-propagating field modes, which in this case interfere destructively with each other outside of the atomic cavity and form a standing wave inside of the atomic cavity, as illustrated in Fig. 4 of the main text.

COOPERATIVITY We remark that the cooperativity for the system can be calculated as C = γ_in/γ_3D, as discussed in the main text. The total emission into the waveguide can be calculated accordingly; substituting (S24) into that expression, it can be simplified to Eq. (10) in the main text.

NON-MARKOVIANITY MEASURE VIA COHERENCE We calculate the coherence measure of non-Markovianity as defined in [S13]. We first calculate the l1-norm of coherence, C_{l1}(t) = Σ_{i≠j} |ρ_ij(t)|, where ρ(t) refers to the system density matrix corresponding to the two emitters. This yields the l1-norm of the coherence for the super- and sub-radiant states. Further considering a general initial state in the single-excitation subspace, we find the time-evolved state and the l1-norm of the coherence for that state. For a given initial state, one can then define the coherence measure of non-Markovianity as N = ∫_{dC/dt>0} (dC/dt) dt. In this expression, we calculate the non-Markovianity measure for a restricted initial-state space of the system [S13], considering the initial-state space of single-emitter excitations. Since the integral is defined over only positive values of dC/dt, a non-zero value of N immediately implies that the system is non-Markovian. We can also obtain a non-Markovianity measure for the super- and sub-radiant states. Optimizing the coherence measure of non-Markovianity over the initial-state parameters, we obtain N = N_sub. This shows that, according to the coherence measure of non-Markovianity, an initially subradiant state remains the most non-Markovian throughout the evolution.

CORRELATION BETWEEN THE EMITTERS AND THE FIELD We calculate the correlations between the emitters and the EM field via the linear entropy of the reduced system density matrix, defined as [S14] S = 1 − Tr[ρ²]. (S49) Evaluating the above for super- and sub-radiant states of the emitters with different delays, we note from the dotted line in (b) that the emitter-field entanglement is optimized for a specific value of the delay, η ≈ 0.8, for an initial subradiant state of the emitters. This corresponds to a BIC.

POSSIBLE IMPLEMENTATIONS Table I lists some potential systems that realize two emitters coupled to waveguides experimentally, and the parameter values that have been achieved in these platforms.
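As a complement to the coherence-based measure just described, here is a rough numerical sketch (our construction, assuming the single-excitation form of the reduced density matrix): with amplitudes c_a(t) and c_b(t), the only independent off-diagonal element of ρ(t) is ρ_{eg,ge} = c_a c_b*, so the l1-norm of coherence reduces to C(t) = 2|c_a(t)||c_b(t)|, and the measure N accumulates the positive increments of C:

```python
import numpy as np

def l1_coherence(c_a, c_b):
    """l1-norm of coherence for a single-excitation two-emitter state;
    the only independent coherence is rho_{eg,ge} = c_a * conj(c_b)."""
    return 2 * np.abs(c_a) * np.abs(c_b)

def nm_measure(C):
    """Coherence measure of non-Markovianity: the integral of dC/dt over
    intervals where C grows, approximated by summing positive increments."""
    dC = np.diff(C)
    return dC[dC > 0].sum()

# Toy example: an amplitude with delay-induced revivals (illustrative only,
# not a solution of the actual dynamics).
t = np.linspace(0.0, 10.0, 2001)
c = np.exp(-t / 2) * (1 + 0.3 * np.maximum(0.0, np.sin(2 * t)))
C = l1_coherence(c / np.sqrt(2), -c / np.sqrt(2))
print("N =", nm_measure(C))  # N > 0 signals non-Markovian dynamics
```

Feeding in the amplitudes from the delay-equation solution sketched earlier would reproduce the behavior described above, with the subradiant initial state giving the largest N.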
2019-07-28T05:26:15.000Z
2019-07-28T00:00:00.000
{ "year": 2019, "sha1": "1e2c1e03d314c948b3ac865ef15c520db025109a", "oa_license": "CCBYNC", "oa_url": "https://dspace.mit.edu/bitstream/1721.1/125112/1/PhysRevLett.124.043603.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "772b8d23c4296cf9ed1b3ecb867c9e81813caa7e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
244488542
pes2o/s2orc
v3-fos-license
Non-invasive hemodynamic analysis for aortic regurgitation using computational fluid dynamics and deep learning Changes in cardiovascular hemodynamics are closely related to the development of aortic regurgitation, a type of valvular heart disease. Metrics derived from blood flows are used to indicate aortic regurgitation onset and evaluate its severity. These metrics can be non-invasively obtained using four-dimensional (4D) flow magnetic resonance imaging (MRI), where accuracy is primarily dependent on spatial resolution. However, insufficient resolution often results from limitations in 4D flow MRI and complex aortic regurgitation hemodynamics. To address this, computational fluid dynamics simulations were transformed into synthetic 4D flow MRI data and used to train a variety of neural networks. These networks generated super-resolution, full-field phase images with an upsample factor of 4. Results showed decreased velocity error, high structural similarity scores, and improved learning capabilities compared with previous work. Further validation was performed on two sets of in-vivo 4D flow MRI data and demonstrated success in de-noising flow images. This approach presents an opportunity to comprehensively analyse aortic regurgitation hemodynamics in a non-invasive manner. Introduction Aortic valve regurgitation, or aortic regurgitation (AR), is a common type of valvular heart disease where the aortic valve does not close properly, causing reflux of blood from the aorta into the left ventricle [1]. This reflux of blood is known as the regurgitant jet. Diagnosis and severity grading of AR are determined by evaluation of flow metrics, for example peak velocity, pressure drop, and regurgitant volume. Cardiovascular four-dimensional (4D) flow magnetic resonance imaging (MRI) is a novel imaging technique to quantify full-field blood flow velocities, providing a three-dimensional (3D) velocity field across a region of interest throughout the cardiac cycle. Currently, due to the small width of the regurgitant jet and the limited spatiotemporal resolution of 4D flow MRI, the technique fails to accurately capture the complex hemodynamics of AR. The combination of computational fluid dynamics (CFD) and deep learning with 4D flow MRI will help to generate higher-resolution images and recover hemodynamic parameters lost in current MRI images, with a target upsample factor of 4. This work extends what has already been completed in 4DFlowNet [2] by using a wider range of flow characteristics to mimic AR, improving the data augmentation steps, and enhancing the artificial neural network with newer architecture structures. Aortic Regurgitation AR occurs when the aortic valve does not close properly, causing blood to flow back into the left ventricle from the aorta. This forces the heart to work harder and pump more blood to the aorta, which can cause further heart problems in the future. AR also has varying levels of severity, from trace or mild through moderate to severe [1]. Acute AR is considered a medical emergency as it can cause severe pulmonary edema and hypotension, that is, excess fluid in the lungs and low blood pressure, respectively. Patients with AR are monitored yearly with echocardiography to decide whether replacement of the aortic valve is necessary [3]. Flow metrics such as peak velocity and pressure drop are typically calculated non-invasively using 2D velocities from 2D Doppler echocardiography [1]. Due to the limited information available in 2D, this method is known to overestimate pressure drops [4].
4D Flow MRI 4D flow MRI is an established imaging technique that captures the temporal changes of 3D blood flow patterns within individual vascular structures [5,6]. Velocities of blood particles are encoded in the phase of the MRI signal, while the anatomy is visualised from the signal's magnitude [7]. However, 4D flow MRI has several limitations, such as low spatiotemporal resolution, long scan time, and a low signal-to-noise ratio (SNR) [8], which make its clinical application to AR difficult. With a spatial resolution between 1.0 and 3.5 mm [6], details of the narrowest part of the jet cannot be captured, as its width is typically much smaller than 3 mm [9] for mild AR. Therefore, spatial resolution is the biggest limitation of 4D flow MRI. Deep Learning in 4D Flow MRI Deep learning [10] has had a significant impact in many scientific sectors and is highly relevant in the field of medical imaging [11]. Advances in super-resolution image reconstruction [12] to obtain high-resolution (HR) images from low-resolution (LR) observations are increasingly being adopted for MRI with a deep learning-based approach [13,14]. This approach is preferred as it not only has an advantage in spatial resolution quality over conventional super-resolution techniques [15], but also successfully denoises flow images [16]. The combination of 4D flow MRI with deep learning has been explored in multiple ways to increase resolution and provide more accurate estimates of physical quantities [2,17,18]. However, there are several limitations involved in this approach. Primarily, these have been related to insufficient data [2,18], which is due to the requirement of paired LR and HR MR images. These can be difficult to obtain, as HR MRI requires long scanning times and is subject to motion artifacts [2,18,19]. As an alternative, CFD models have been used to simulate 4D flow MRI as ground-truth HR images, which are then downsampled to LR images [2,18]. Other limitations include unstable and non-robust network architectures [17] (the architecture, that is, the organisational structure of the network's layers, plays a significant role in the performance of the DL algorithm), as well as the neglect of phase/velocity aliasing error [2,16]. The aliasing error here refers to aliasing from having a velocity encoding (VENC) [20] that is too low [21], rather than other types of MRI spatial aliasing, which have been explored previously and reduced [22][23][24]. Note that VENC is an MR parameter to adjust the maximum velocity corresponding to a 360° phase shift in the data. Network Architecture Recent development in object detection has proven significant in advancing ANN architectures [25], which are widely used in many medical imaging applications [11]. Examples include residual blocks [26], dense blocks [27], and cross stage partial blocks [28], which show promise in increasing network capacity while mitigating degradation and memory utilisation issues. Data Generation Modelling of the aortic valve was done using Ansys 2021 R1, which has two main options for CFD simulations: CFX and Fluent. CFX was chosen as it is better suited for simpler, low-Mach-number flows, such as the flow through the aortic valve. Modelling was an iterative process to determine the best parameters and options that would increase efficiency and accuracy. Geometries The design of the problem geometry focused on simplicity over replicating a real-life aortic valve, allowing for geometries to be created easily.
The basic geometry is shaped like a cylinder with a constricted section part way along its length, resembling a Venturi tube. The constricted section represents the gap in the aortic valve present for aortic regurgitation to occur. Figure 1 shows a sketch of the basic geometry. In total, 20 different geometries were generated. The first set of 10 geometries had variations in inlet velocity, inlet radius, and constricted-section radius, while the other set of 10 geometries had different shapes. These geometries were designed to capture maximum jet velocities between 2.5 and 5.0 ms⁻¹. The second set of geometries was based on the third geometry, as this geometry was reasonably small with minimal computational time. This set had diagonal and off-centre constricted sections, as well as combinations of both, to model eccentric jets. Figure 2 demonstrates these differences in shape. The parameters for all geometries are laid out in Table 1. Boundary Conditions The relevant boundary conditions are those relating to the inlet, outlet, and inner wall. The inlet boundary conditions take the most work to define, as the velocity varies both in space and time. For blood flowing through the AV, the flow is expected to be fully developed, that is, the velocity is zero at the inner walls due to friction and at its highest in the centre. This can be achieved by extending the length that the blood needs to travel from the inlet to the constricted section, allowing the flow to fully develop before reaching the aortic valve. However, extending the length means the geometry becomes larger, hence increasing the computational time. To compensate for this, the velocity profile at the inlet was defined with a parabolic shape and the upstream length was set to 20 mm [29]. This means the flow starts out more developed than with a uniform profile and becomes fully developed before reaching the aortic valve. The velocity was also time-dependent and represented the diastole (where regurgitation occurs), with a rapid initial increase in magnitude before slowly decreasing [9]. The parabolic velocity profile at the inlet in Cartesian coordinates was defined by v(x, y) = vmax (1 − (x² + y²)/R_I²), where vmax is the maximum velocity at the centre of the inlet and R_I is the inlet radius. The remaining boundary conditions were for the outlet and walls. The outlet was defined as an opening with zero pressure difference, and the walls were defined as non-permeable with no-slip boundary conditions. Data Preparation To start, the raw CFD simulations and geometries, which each had 71 time frames, were sampled onto a uniform Cartesian grid to be used as HR images. This was completed with linear interpolation and took from 10 minutes to 3 hours per time frame, depending on the size of the geometry and the CFD data. To separate the fluid and non-fluid regions for better data processing and result quantification, binary masks were generated using k-Nearest-Neighbours [30]. The main difference between these synthetic HR images and MR images relates to the voxel size, VENC, and the amount of noise: HR images are noise-free, whereas MR images contain phase noise. The LR MR images were obtained from the HR images using the same method as in 4DFlowNet [2] to simulate 4x downsampled MR images with appropriate noise and VENC. This gives the paired LR and HR synthetic images used in network training. Data Augmentation To augment the data set, similar techniques were used as in 4DFlowNet [2] for each time frame; where aliasing was simulated, the phase-wrapping relation sketched below applies.
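The standard phase-wrapping relation of phase-contrast MRI is that velocities outside [−VENC, +VENC] alias back into the measurable range. A minimal sketch (our construction, not the paper's code):

```python
import numpy as np

def wrap_velocity(v, venc):
    """Alias velocities as in phase-contrast MRI: the phase, and hence the
    encoded velocity, is only defined modulo the [-venc, +venc) interval."""
    return np.mod(v + venc, 2.0 * venc) - venc

# A 4.0 m/s jet acquired with VENC = 3.0 m/s appears as -2.0 m/s,
# producing the mosaic pattern characteristic of a too-low VENC.
print(wrap_velocity(4.0, 3.0))
```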
VENC values were randomly chosen from a set of velocities between 0.3 ms⁻¹ and 6.0 ms⁻¹, spaced by 0.3 ms⁻¹, for each velocity component. Aliasing was mostly avoided by choosing a VENC larger than the peak velocity. However, since velocity jets cannot be estimated beforehand for actual aortic regurgitation cases (which may cause phase aliasing), a VENC lower than the maximum velocity was chosen with a 10% probability, randomly selected to be 0.3 ms⁻¹ or 0.6 ms⁻¹ lower. Some of these aliased patches are shown in Figure 3. Constant intensity values between 60 and 240 were randomly chosen for the magnitude image, and noise levels were added depending on the signal-to-noise ratio, which was randomly and uniformly chosen between 14 and 17 decibels. Since there were a limited number of geometries, further augmentation came in patch generation. From each time frame, 10 patches of 12x12x12-voxel cubes from the LR image were selected randomly with a minimum fluid region of 20%. These patches acted as random translations, so no extra translation steps were taken. On top of this, for each patch generated, another randomly rotated version of the patch was also created. This resulted in 20 patches generated from each time frame and thus 1420 patches per geometry. Training and Validation To investigate the effect of additional geometries on SR image quality, a subset of the data consisting of patches from only five geometries was compared against the entire data set consisting of patches from all geometries. The validation set in both cases was the same, consisting only of patches from a single geometry. To investigate the effect of aliasing, duplicates of the two training and validation sets were generated, but with a 10% probability of having a VENC lower than the maximum velocity in any time frame. Networks trained using these aliased data sets were validated against the previously generated validation set without any aliasing, as well as a newly generated validation set with full aliasing, that is, with each time frame having a VENC lower than the maximum velocity. Network Architecture The simulated pairs of LR and HR synthetic images were used to train a deep residual network structure similar to the one in 4DFlowNet. This consisted of several residual blocks surrounding a central upsampling layer, with the preceding blocks in the LR space pre-processing and acting as denoisers for the input, while the following blocks in the HR space refine the output. In 4DFlowNet, LR patches of 16-voxel cubes were used as input and SR patches of 32-voxel cubes were generated as output, with an upsample factor of 2. Several changes were made to the above 4DFlowNet architecture to provide higher-resolution images with improved accuracy. Firstly, the upsample factor was increased to 4 and the sizes of the input and output patches were changed to 12-voxel and 48-voxel cubes, respectively. The smaller patches account for the smaller vessel sizes in the cardiovascular space around the aortic valve [18]. Secondly, the dense and cross stage partial blocks of DenseNet and CSPNet, respectively, were experimented with by using them in place of the 12 residual blocks in the original 4DFlowNet architecture (a schematic of these block types is sketched below). The growth rate [27], defined as the number of feature maps in each convolutional layer, of the dense and CSP blocks was set to 16, a quarter of the number of channels in each convolutional layer of the original residual blocks.
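As a rough schematic of the three block types compared in this work, consider the following TensorFlow/Keras sketch (our construction; layer counts, kernel sizes and activations are illustrative assumptions rather than the exact 4DFlowNet configuration):

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block_3d(x, channels=64):
    # Identity-skip block; x is assumed to already have `channels` channels
    y = layers.Conv3D(channels, 3, padding="same", activation="relu")(x)
    y = layers.Conv3D(channels, 3, padding="same")(y)
    return layers.Activation("relu")(layers.Add()([x, y]))

def dense_block_3d(x, n_layers=4, growth_rate=16):
    # DenseNet-style block: each layer contributes growth_rate feature maps,
    # concatenated onto everything produced before it
    for _ in range(n_layers):
        y = layers.Conv3D(growth_rate, 3, padding="same", activation="relu")(x)
        x = layers.Concatenate()([x, y])
    return x

def csp_block_3d(x, n_layers=4, growth_rate=16):
    # CSPNet-style block: route part of the input channels through a partial
    # dense block and concatenate with the untouched remainder
    split = x.shape[-1] // 4
    part, rest = x[..., :split], x[..., split:]
    part = dense_block_3d(part, n_layers, growth_rate)
    return layers.Concatenate()([part, rest])
```

The dense and CSP variants grow the channel count as the block deepens, which is what gives them the higher learning-capacity ceiling discussed later, at the cost of memory in the dense case.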
The adapted 4DFlowNet architecture with residual blocks (4DFlowNet-Res), with dense blocks (4DFlowNet-Dense), and with cross stage partial blocks (4DFlowNet-CSP) had 3.34, 2.55, and 2.08 million parameters, respectively. These modified networks were implemented with TensorFlow 2.0 [31] and trained using an Adam optimiser [32], with an initial learning rate of 10⁻⁴ decayed by a factor of √2 every 14 epochs. Batch sizes of 16 were used, with training completed in 200 epochs. Loss Function The network was optimised by minimising the mean squared error (MSE) between the paired HR images and the SR images generated from the corresponding input LR ones. The voxel-wise loss was calculated as the mean sum of squared differences between the velocity components, L_MSE = (1/N) Σᵢ Σⱼ (v̂ⱼ − vⱼ)², where N is the total number of voxels in the geometry, v̂ⱼ is the predicted SR velocity, and vⱼ is the actual HR velocity, for j ∈ {x, y, z}. The MSEs of the fluid and non-fluid regions were calculated as separate terms due to the imbalance and irregularity of these regions within a specific patch. This gives the total loss as L = L_MSE,F + L_MSE,N, where L_MSE,F and L_MSE,N are the voxel-wise losses for the fluid and non-fluid regions, respectively. The original loss function in 4DFlowNet contained a weighted velocity-gradient term to smoothen the gradient between neighbouring vectors [2]. This was omitted from the above loss function, as improvements were observed in near-wall velocity estimates with its removal [18]. Evaluation Metric The relative speed error (RE), the relative difference between the SR velocity magnitude (speed) and the actual HR speed on the validation set, was used to measure network performance and save model checkpoints. This was only calculated in fluid regions to avoid zero-division errors; in addition, a small number (ε = 10⁻⁴) was added to the denominator. Furthermore, since many speed values in the HR images were quite small, this could risk significantly over-penalising the model. Thus, an arctangent approach [33] was adopted, giving the relative speed error as RE = (1/N) Σᵢ arctan(| |v̂ᵢ| − |vᵢ| | / (|vᵢ| + ε)), where N is the total number of voxels in the fluid domain, v̂ and v are the predicted SR and actual HR velocity vectors, and arctan is the arctangent function, defined for all real values with lim_{x→∞} arctan x = π/2. The RE is thus equivalent to the mean arctangent absolute percentage error (MAAPE) of the speed. In addition to the RE, network performance was also evaluated using the root mean squared error (RMSE) and the structural similarity (SSIM) metric [34] in all three Cartesian velocity components. These were compared against the baseline 4DFlowNet model that had been trained with an upsample factor of 2. Results Training was performed using a Tesla V100 GPU with 32 GB of memory, with networks being trained for 200 epochs. Improvements in the relative speed error (RE) plateaued around the 100-epoch mark for 4DFlowNet-Res, while still improving for 4DFlowNet-CSP and 4DFlowNet-Dense up till the very last epoch. This can be seen in Figure 4. The time taken was dependent on the type of network: 4DFlowNet-Res, 4DFlowNet-CSP, and 4DFlowNet-Dense took approximately 163, 168, and 255 hours, respectively. Note that these times were for the networks trained using all geometries. For the networks trained using only five geometries, denoted as 4DFlowNet-Res5, 4DFlowNet-CSP5, and 4DFlowNet-Dense5, the times taken were approximately 38, 40, and 63 hours, respectively.
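Stepping back to the loss and validation metric defined above, a minimal sketch of both (our reconstruction; tensor shapes and the masking convention are assumptions):

```python
import tensorflow as tf

def masked_mse_loss(v_true, v_pred, fluid_mask):
    """Voxel-wise MSE summed over the three velocity components, computed
    separately for fluid and non-fluid regions and then added."""
    sq = tf.reduce_sum(tf.square(v_true - v_pred), axis=-1)  # sum over x, y, z
    n_f = tf.reduce_sum(fluid_mask) + 1e-8
    n_n = tf.reduce_sum(1.0 - fluid_mask) + 1e-8
    return (tf.reduce_sum(sq * fluid_mask) / n_f
            + tf.reduce_sum(sq * (1.0 - fluid_mask)) / n_n)

def relative_speed_error(v_true, v_pred, fluid_mask, eps=1e-4):
    """Arctangent-bounded relative speed error (MAAPE-style), evaluated
    over fluid voxels only."""
    s_t = tf.norm(v_true, axis=-1)
    s_p = tf.norm(v_pred, axis=-1)
    ape = tf.math.atan(tf.abs(s_p - s_t) / (s_t + eps))
    return tf.reduce_sum(ape * fluid_mask) / tf.reduce_sum(fluid_mask)
```

The arctangent bounds each voxel's contribution by π/2, so near-zero HR speeds cannot dominate the metric, which is the over-penalisation problem described above.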
There was no significant difference in training time between networks trained with and without a portion of aliased data, denoted by '-A'. Networks were tested on one complete geometry consisting of 71 time frames with no (phase/velocity) aliasing, and on the same complete geometry with full aliasing. These predictions were required to be patch-based, since patches were used as the input and output for each network. The complete geometry was reconstructed by stitching together multiple SR velocity field patches, which was done with a stride of (n − 4) in each Cartesian direction, where n is the arbitrary patch size. To avoid patch artifacts at the boundary, four voxels were stripped from each patch side. Synthetic MR Images SR images were analysed visually and quantitatively to better understand how each model was performing. Figures 5 and 6 are visual examples of the predictions of the different networks in the constricted section at peak flow. These display the effectiveness of each network in reducing noise, with the predictions looking quite similar to the ground truth. Furthermore, the networks trained with a proportion of aliased data seem to perform better than the networks without, especially for data with aliasing error. The values for each evaluation metric were collected and compared in Tables 2 and 3 to quantify the performance of every model. Again, these values were taken from predictions on the complete test geometry, with and without aliasing, and the metrics are plotted in Figure 7. Briefly, 4DFlowNet-Dense-A and 4DFlowNet-CSP-A seem to perform the best, with the lowest RMSE and the largest SSIM in the principal flow direction (vx), respectively, on both aliased and non-aliased data. However, there does appear to be considerable variation in these results between different networks, depending heavily on the size of the data set and whether aliasing is present. Regarding the RMSE in the principal flow direction (RMSEx), all networks perform better, on average, than the base 4DFlowNet, with less variation in RMSEx. For the smaller data set with five geometries, the residual-based (Res) versions seem to have the smallest RMSEx. There is also a noticeable improvement in error between networks trained on the smaller and larger datasets when predicting on non-aliased data. However, when predicting on aliased data, the improvement is significantly more apparent. The increase in dataset size improves the RMSE in the other two flow directions for all networks too, with the CSP versions somewhat better than the other versions. Regarding the SSIM in the principal flow direction (SSIMx), all networks also perform better, on average, than the base 4DFlowNet. However, the SSIMx seems to worsen for a larger dataset, in general, when predicting on non-aliased data. On the other hand, when predicting on aliased data, there does appear to be a slight improvement in SSIMx across all networks. For the SSIM in the other two flow directions, the values are considerably more varied and worse than in the principal direction. Finally, all networks have a substantially lower RE than the base, with the Dense versions having the lowest RE. The regression plots in Figures 8 and 9 show that there is excellent correlation between the SR and synthetic HR images. The regression slopes are very close to one in the principal-flow-direction and velocity-magnitude plots for the two CSP networks. Moreover, offset values are also essentially zero in all examples shown. Training with only non-aliased images appears to have an effect when predicting the higher velocity values in aliased images, with these values being slightly underestimated, as seen on the right in Figure 9 and confirmed by the corresponding Bland-Altman plots.
Fig. 8 Regression and Bland-Altman plots for prediction on non-aliased (left) and aliased (right) data with 4DFlowNet-CSP-A, for each of the velocity components and the magnitude (vx, vy, vz, and |v|, respectively, from top to bottom), between SR and synthetic HR images. Fig. 9 Regression and Bland-Altman plots for prediction on non-aliased (left) and aliased (right) data with 4DFlowNet-CSP, with panels as in Fig. 8. These plots also indicate minimal bias, as the deviations appear constant, uniform, and all less than 0.08 ms⁻¹. For velocities within the constricted section, Figure 10 shows the correlation between SR and synthetic HR images. Although a slight underestimation bias seems to prevail, 4DFlowNet-CSP shows noticeable improvement over the baseline 4DFlowNet, improving on the otherwise observed deviations at higher velocities. Note that the general trends and comments regarding these regression and Bland-Altman plots were present in all other evaluated networks too. In-vivo 4D Flow MRI Data Ethical approval for this study was granted by the Health and Disability Ethics Committee of New Zealand (17/CEN/226), and written informed consent was obtained from each participant. Two sets of in-vivo 4D flow MRI data (CH34 and CH37) were acquired and used to show how the network would perform, as seen in Figure 11. These were only LR images, with no HR images available to compare the predicted SR images against. However, the SR images produced were compared against the baseline 4DFlowNet to help better understand the predictions. The SR images seem to have effectively removed noise from the LR image, with the new 4DFlowNet-CSP networks appearing to remove slightly more noise throughout the LR image than the baseline 4DFlowNet. Other networks performed similarly, with predicted SR images almost identical to the ones shown. The top set of data (CH34) was also segmented and visualised with ParaView [35], shown in Figure 12. Image stitching appears to have been performed correctly, with velocities within the fastest section preserved well. Fig. 12 Predicted SR images visualised in ParaView. The columns show the velocity magnitude, its vector field, and the streamline reconstruction within the largest-magnitude section. Scale is in metres per second. Discussion A noteworthy consideration when analysing the results is that the focus should be more towards metrics and values obtained in the principal flow direction x. Since the peak velocities in the other flow directions, y and z, were almost 10 times smaller than that in the principal direction, networks would have difficulty differentiating between velocity and noise. This is evident in the y and z RMSE values, which were over half of their corresponding peak velocities. The y and z SSIM values were also much worse, potentially exhibiting strange behaviour in these low-velocity fields too [36]. Synthetic 4D Flow MRI Data The main limitation of previous work has been related to insufficient data and flow characteristics [2,16,18]. To understand the effect of additional flow characteristics in the dataset, a wider range was used in this study.
There was a clear improvement in the RMSE across all networks tested, particularly when predicting on aliased data. This indicates that with more geometries, and hence more flow characteristics, the network seems to generalise better and is more robust against data it has not seen. In terms of the synthetic LR and HR image pairs, the downsampling process and patch-based approach seemed to work effectively. The noisy LR patches enabled the network to learn noise removal while also enabling greater generalisation to unknown flow characteristics or geometries [2]. However, a small portion of incorrect non-fluid velocity regions around the fluid domain appears to have been included in network training, as seen in Figures 5 and 6. These were generated during the linear interpolation process when obtaining the HR images from the CFD data. This suggests that the binary mask for separating the fluid and non-fluid regions, created using k-Nearest-Neighbours, needs improvement. An obvious consequence is that the network may learn incorrect flow characteristics and predict less accurately. Incorporating aliased patches into the training data seemed to work effectively as well, improving the RMSE across networks trained with all geometries. This was done by choosing VENC values lower than the maximum velocity, within a particular time frame, with a 10% probability. Note that this probability was chosen arbitrarily. If the VENC is set too high, the jet may not be clearly visualised and velocity estimates become inaccurate, with a poorer SNR. On the other hand, if it is set too low, flow characteristics may be lost and a mosaic pattern will be shown [37]. This means that, for the time frames with aliasing, the velocities lower than the VENC would have been captured significantly more clearly, with only a small portion of the high-velocity characteristics being lost. For these time frames, the purpose would be to help the network better learn the flow characteristics in lower-velocity fields, improving robustness and generalisability. However, this hypothesis has not yet been tested, and more experiments on aliasing will be required to fully understand its effect on network performance. This would involve varying its probability of occurrence as well as choosing VENC values considerably lower than the maximum velocity. Network Architecture A major drawback of residual blocks is that they suffer from limited learning ability [27]. This was seen in the results, in which the residual networks did not show much improvement even after adding many more geometries or introducing aliased images into the training data. Furthermore, the RE for residual networks seemed to plateau at around the 100th epoch, whereas the RE for the other networks seemed to continue improving, although at a decreasing rate, even up till the last epoch. These observations can be seen in Figure 4. Despite this, the residual networks still performed well, with RMSE, SSIM, and RE values similar to the other networks, as well as having the fastest training and prediction times. This suggests that, when data is insufficient or very limited, residual networks may work best. With sufficient or abundant data, cross stage partial or dense networks may be preferred. The learning ability of these types of networks has a significantly higher ceiling than that of residual networks [27,28], with the only difference between these two network structures being the training and prediction times.
Cross stage partial networks had training and prediction times almost as fast as residual networks, whereas dense networks were almost 1.6 times slower. Although training times may not be a crucial problem, clinicians and patients alike may require and desire fast prediction times. This leads to cross stage partial networks being preferred. Otherwise, if training and prediction times are not significant constraints, then dense networks may be the optimal choice. Furthermore, the growth rate for the dense and cross stage partial networks was set quite low, at a quarter of the number of feature maps in each convolutional layer within the residual blocks. This limits the learning ability of these networks, as there are significantly fewer parameters, so testing larger values of this hyperparameter will be beneficial and is likely to improve network performance. Similarly, only a quarter of the feature maps were taken from the base input layer within each partial dense block of the cross stage partial networks, so larger values of this hyperparameter are likely to improve performance as well. Although the network architecture seems to work quite effectively, there are still improvements that could be made beyond modifying the residual blocks. Presently, the network does not take full advantage of the temporal aspect of 4D flow MRI. Modifying it to incorporate characteristics of recurrent neural networks [38] may help the network exploit this temporal aspect better. This could be done by using predictions of the same patches from one or two time frames prior. Additionally, including physical properties of fluids may also help the network in learning flow characteristics, as seen in [17,18]. However, due to the bulky nature of the velocity data, which were 3D volumes for each velocity component, these ideas were not considered further, as they would have been too costly to process given the available resources. Clinical Application The clinical motivation for the current study was the assessment of regurgitant or highly stenotic valvular flow; instances where quantification of regional velocities and changes in pressure act as effective biomarkers to describe the severity of disease. In these instances, and in clinical practice, assessment of peak velocities through the vena contracta is used to indicate disease severity. In fact, regional pressure drops are routinely derived from such peak velocity measures using the so-called simplified Bernoulli equation [39] (coupling peak velocities to effective pressure changes). In this light, our results bear possible clinical implications in that SR velocities effectively recover HR reference measures. This holds true throughout the evaluation domain, and in the clinically important constricted section, underestimation biases associated with the original 4DFlowNet formulation were effectively suppressed, as seen in Figure 10. The effect of the remaining minor deviation from a true 1:1 correlation between HR and SR data remains to be explored in a larger clinical setting, and in more complex flow scenarios, such as regurgitant flow, higher-order methods might be required to derive pressure drops from measured velocity data [4,40].
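For reference, the simplified Bernoulli relation mentioned above couples the peak velocity (in m/s) to the pressure drop (in mmHg) as ΔP ≈ 4v²; a one-line sketch:

```python
def simplified_bernoulli(v_peak):
    """Pressure drop in mmHg from peak velocity in m/s (simplified Bernoulli)."""
    return 4.0 * v_peak ** 2

print(simplified_bernoulli(4.0))  # a 4 m/s jet -> ~64 mmHg
```

This makes clear why recovering accurate peak velocities with SR matters clinically: the derived pressure drop scales with the square of the velocity, so any underestimation bias is amplified.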
Nevertheless, just as we have shown the potential of recovering functional hemodynamic behavior through the spatially challenging cerebrovascular space using SR 4D flow MRI [18], the results of the current study bear similar potential for recovering clinically relevant hemodynamic metrics in aortic regurgitation. Limitations Although the number of geometries and flow patterns has significantly increased from previous studies, a wider range of characteristics can still be included. Additional data will diversify the data set further and improve the model's robustness and generalisability even more. On top of this, more testing on real 4D flow MRI data will be required to validate model performance. Currently, this validation is only done by visually analysing model predictions to see if they look reasonable and sensible. The preferred approach would be to validate quantitatively with a pair of corresponding LR and HR 4D flow MRI images, calculating metrics such as RMSE and SSIM to properly understand model performance. Lastly, several practical limitations can also be noted. Due to the limited GPU resources, training took approximately 60 days for all networks. With either more time or increased GPU resources, more networks could be trained by using different values for hyperparameters such as the growth rate and the probability that aliasing occurs. Moreover, networks could also be trained for more epochs, or until the error plateaus, to better gauge the learning ability of different networks. Finally, memory constraints were a factor too, as an upsample factor of 4 led to 64 times more disk space usage. This would not be feasible for much larger upsample factors, so a different data representation may be required in future work. Conclusion In this study, it was shown how deep learning and computational fluid dynamics can be combined to effectively quantify hemodynamics for aortic regurgitation. 4DFlowNet was successfully adapted and modified to produce 4D flow MRI SR images with an upsample factor of 4. The results show that adding more geometries, and hence flow characteristics, to the data set improves the accuracy of 4DFlowNet predictions. Moreover, the comparison of different network architectures suggested that the original residual network structure limits learning ability and can be further refined.
2021-11-24T02:16:28.287Z
2021-11-23T00:00:00.000
{ "year": 2021, "sha1": "68190f84a45c97d883056b49332984e8afd6c04a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "68190f84a45c97d883056b49332984e8afd6c04a", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
243848208
pes2o/s2orc
v3-fos-license
Microstructure, grain boundary evolution and anisotropic Fe segregation in (0001) textured Ti thin films The structure and chemistry of grain boundaries (GBs) are crucial in determining polycrystalline materials' properties. Faceting and solute segregation to minimize the GB energy is a commonly observed phenomenon. In this paper, a deposition process to obtain pure tilt GBs in titanium (Ti) thin films is presented. By increasing the power density, a transition from polycrystalline film growth to a maze bicrystalline Ti film on SrTiO$_3$ (001) substrate is triggered. All the GBs in the bicrystalline thin film are characterized to be $\Sigma$13 [0001] coincident site lattice (CSL) boundaries. The GB planes are seen to distinctly facet into symmetric {$\bar{7}520$} and asymmetric {$10\bar{1}0$} // {$11\bar{2}0$} segments of 20-50~nm length. Additionally, EDS reveals preferential segregation of iron (Fe) in every alternate symmetric {$\bar{7}520$} segment. Both the faceting and the segregation are explained by a difference in the CSL density between the facet planes. Furthermore, in the GB plane containing Fe segregation, atom probe tomography is used to experimentally determine the GB excess solute to be 1.25~atoms/nm$^{2}$. In summary, the study reveals for the first time a methodology to obtain bicrystalline Ti thin films with strong faceting and anisotropy in iron (Fe) segregation behaviour within the same family of planes. Introduction Titanium (Ti) and its alloys have a high strength-to-weight ratio, excellent corrosion resistance and biocompatibility [1]. These properties make Ti an attractive structural material, mainly in the aerospace and biomedical industries [2]. Despite possessing the majority of the desired properties for automotive industries, the most significant impediment to Ti alloys' widespread use has been their high cost. Many alloying additions have been investigated in recent decades not only to improve mechanical and high-temperature properties but also to make the alloys affordable [3,4]. Ti exhibits an allotropic transition from the low-temperature hexagonal close-packed (hcp) α-phase to the high-temperature body-centered cubic (bcc) β-phase at 882 °C. The alloying elements added to Ti are classified as α-stabilizers (Al, Sn, Ga, Zr, C, O, and N), β-stabilizers (Fe, V, Mo, Nb, Ta, and Cr), or neutral elements (Zr, Sn, and Si), based on the phase they stabilize. Iron (Fe) is one of the most cost-effective β-stabilizing alloying elements and has therefore attracted considerable attention [5,6,7,8]. The addition of Fe can result in its segregation at grain boundaries (GBs), β-phase formation, or precipitation of intermetallic compounds. Ti-Fe alloys are either β-phase or a mixture of α- and β-phase, where the phase fraction depends on the alloy composition [9]. When added in excess of 2.5 at.%, Fe is known to form β-flakes at the GBs, which are deleterious to the material properties [10]. Additionally, two metastable phases, α′-Ti (hcp) and ω-Ti(Fe), have been observed during martensitic transformation of Ti-Fe alloys [11,12,13,14,15]. The ω-Ti(Fe) is a high-pressure Ti phase which is retained at low pressure in Ti-Fe alloys. However, the role of Fe in the α → β and β → ω phase transitions is not yet clear [16]. A change in the chemical composition of the interfacial region changes the thermodynamic driving force for solid-state phase transitions [17]. Fe segregation at GBs in Ti alloys can also lead to the formation of TiFe or Ti2Fe intermetallic compounds [18].
These binary intermetallics have been considered a potential choice for solid-state hydrogen storage applications [19]. Likewise, Ti-Fe alloys have been extensively used to fabricate near-net-shape parts using the blended elemental powder metallurgy (BEPM) route to achieve cost reduction [20,21]. Overall, a multitude of phase transitions have been realized in Ti-Fe alloys, but numerous questions on how Fe influences these phase transformations remain unanswered. Some of the answers are likely to lie in the GB segregation of Fe that leads to precipitation or other phase transitions. In the case of α-Ti, the maximum solubility of Fe is less than 0.05 at.% [22,23]. Consequently, when added in excess of the solubility limit, Fe must either form secondary phases or segregate at the GBs. Although extensive studies have been performed on GB segregation in many fcc and bcc metals and alloys, limited reports have discussed GB segregation in Ti or other hcp metals. Oxygen and carbon have been shown to segregate weakly at Ti GBs, although oxygen has a high solubility in α-Ti [24]. In β-Ti, Fe and Cr have been shown to segregate at the GBs and act as grain refiners upon cooling [25]. The segregation of Fe at Ti GBs has repeatedly been suggested to be responsible for pinning the GBs, leading to remarkably stable nanocrystalline Ti [24,26]. Recently, density functional theory (DFT) calculations also confirmed the presence of a high driving force for Fe to segregate at α-Ti GBs [24]. However, it is unclear how Fe pins the GBs and whether or not secondary phases contribute to this. In many cubic materials, segregation is observed to vary strongly with the nature of the GB [27]. Nevertheless, no anisotropy in segregation with respect to the GB type in Ti or other hcp metals has ever been reported, either experimentally or theoretically. Such a systematic study of the influence of GB character on material properties requires a template-based approach to obtain the desired GBs. For many metals, bicrystalline samples with predefined GB parameters have been grown using the vertical Bridgman technique for specific GB property studies [28]. However, in Ti, the hcp-bcc allotropic transition makes it impossible to fabricate bicrystals with specific desired GBs. Therefore, a novel thin film deposition route to obtain bicrystalline Ti has been established here. Similar bicrystalline thin films have been demonstrated earlier for other metals like Cu and Au [29,30,31,32]. To deposit such films, a thorough understanding of the impact of various deposition parameters on the microstructure of the film is required. In physical vapour deposition, the degree of ionisation of the plasma particles determines the ion flux towards the growing film [33]. A high ion fraction in a discharge is achieved by promoting electron-impact ionization, which requires a plasma of high electron density and temperature. Such a plasma is formed by using a pulsing unit at the target. Textured Ti films with (0002) out-of-plane orientation were recently reported using a similar deposition route [34]. In the following sections, firstly, a template-based approach to obtain Σ13 GBs in thin films of Ti on a SrTiO3 substrate is established. Pulsed magnetron sputtering using a commercially pure Ti target with trace Fe impurities is used to obtain a bicrystalline thin film with columnar grains.
Subsequently, electron backscatter diffraction (EBSD) in a scanning electron microscope (SEM) is used to characterize the thin film microstructure. The GBs contained in the thin film are analyzed in detail using scanning transmission electron microscopy (STEM) and are observed to be faceted into symmetric and asymmetric segments. Using high-resolution energy dispersive spectroscopy (EDS) analysis, the segregation of Fe to these topographically complex GBs is explored. Thin film deposition Thin films of Ti were deposited onto 10 × 10 mm² SrTiO3 (001) substrates (Crystal GmbH, Germany) using pulsed magnetron sputtering in a commercial deposition system (CemeCon AG CC 800-9). A rectangular 500 × 88 × 10 mm³ Ti target of above 99 wt.% nominal purity (grade 2), with 0.2 wt.% Fe (0.17 at.%), 0.18 wt.% O (0.54 at.%) and 0.1 wt.% C (0.54 at.%) as the major impurities, was positioned at a distance of 10 cm from the substrate holder. The base pressure was 2.2 × 10⁻⁶ mbar and increased to 3 × 10⁻⁶ mbar after heating the substrate to 600 °C using radiation heaters. During deposition, the Ar flow rate was set to 200 sccm, leading to a working pressure of approximately 3.8 × 10⁻⁴ mbar. A Melec SIPP2000USB-16-500-5 power supply was used, and the substrates were at floating potential. Four different deposition conditions are discussed in the following. The first film was deposited using DC sputtering with 250 W power for 2.5 h, where a deposition rate of 1.33 Å/s was obtained. The pulsing unit was not used in this deposition. For the subsequent three films, pulsed magnetron sputtering was used. In pulsed magnetron sputtering, the conventional sputtering source is used in a pulsing mode with a predetermined pulse duration ranging from 1 µs to 1 s to increase the current density. This additional degree of freedom to adjust the pulse duration can be used to tailor desired microstructures [35,36]. The target voltage and current variation during a single pulse is shown in Fig. 1. The time-averaged power was set to 1500 W with a pulse duration t_on/t_off of 200/1800 µs. This led to a peak current (I_ion) of 40 A, resulting in a dense film, and a peak target power density of 46 W cm⁻². The three films were grown under identical conditions by keeping all the film deposition parameters unchanged but changing the post-deposition annealing duration between 2 h, 4 h and 8 h at 600 °C to investigate its influence on the film microstructure, texture and grain size. The annealing was performed immediately after turning off the plasma, without breaking the vacuum, because Ti is known to be highly prone to oxidation. With a deposition rate of about 8.3 Å/s, a film thickness of 1.5 µm was obtained in ∼30 min. The venting temperature was always <70 °C to reduce surface oxidation [37]. Microstructural characterization The preliminary investigation was carried out using a light optical microscope (LOM) to check for any cracks/defects on the surface. Subsequently, the orientations of all grains were mapped using EBSD in an SEM. An in-plane lift-out technique in a dual-beam focused ion beam (FIB) instrument (Thermo Fisher Scientific Scios 2 HiVac) with a Ga⁺ ion source was used to extract a transmission electron microscopy (TEM) lamella. The beam current was gradually reduced in several steps, starting from 1 nA at 30 kV for coarse milling down to 27 pA at 2 kV for final polishing, to obtain a thickness of <100 nm.
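As a quick consistency check of the pulse and deposition parameters quoted earlier (our arithmetic, not from the paper; the quoted peak power density additionally depends on the actual pulse current-voltage waveform):

```python
# Duty cycle and film thickness implied by the stated parameters
t_on, t_off = 200e-6, 1800e-6            # pulse on/off times, s
duty_cycle = t_on / (t_on + t_off)        # = 0.10
p_avg = 1500.0                            # time-averaged power, W
p_peak_est = p_avg / duty_cycle           # ~15 kW during the on-phase (estimate)

rate_A_per_s, t_dep_s = 8.3, 30 * 60      # deposition rate (angstrom/s) and time (s)
thickness_um = rate_A_per_s * t_dep_s * 1e-4   # ~1.5 um, matching the text
print(duty_cycle, p_peak_est, thickness_um)
```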
Probe-corrected STEM in a Titan Themis 80-300 (Thermo Fisher Scientific) was used at an acceleration voltage of 300 kV. A semi-convergence angle of 23.8 mrad was used for imaging. With a camera length of 100 mm, collection angles of 78-200 mrad and 38-77 mrad were obtained for the high-angle annular dark field (HAADF) and the annular dark field (ADF) detectors, respectively. Thermo Scientific ChemiSTEM Technology using four in-column Super-X detectors was used with a beam current of ∼50 pA for the EDS analysis. Laser-pulsed atom probe tomography (APT) was performed in a LEAP™ 5108XR (CAMECA) at a repetition rate of 200 kHz, a specimen temperature of about 50 K, a pressure lower than 1 × 10⁻¹⁰ Torr (1.33 × 10⁻⁸ Pa) and a laser pulse energy of 20 pJ. The evaporation rate of the specimen was 5 atoms per 1000 pulses. Datasets were reconstructed and analyzed with the AP Suite 6.1 software based on the voltage curves. Using the results from APT, the interfacial excess was experimentally determined by selecting a region of interest (ROI) across the interface where the solute is segregated and plotting the so-called ladder diagram. A ladder diagram is established by plotting the cumulative number of solute atoms against the cumulative total number of atoms along the ROI. A linear fit within the concentration profile of each of the two grains can be extrapolated to the Gibbs dividing surface (GB surface) to find the solute content in both grains, N_a and N_b. Using this, the Gibbsian interfacial excess is calculated as Γ = (N_b − N_a)/(ε A), (1) where A is the interfacial area sampled by the ROI and ε is the detection efficiency. Additional details are described in [38]. The detection efficiency was 0.52. Evolution of thin film microstructure and grain boundaries The Ti film deposited on SrTiO3 (001) at 600 °C using DC sputtering was characterized by SEM and EBSD, as shown in Fig. 2. Following the deposition, the film was post-annealed at 600 °C for 2 h in the deposition chamber. The secondary electron (SE) image in Fig. 2 a) reveals a rough surface and small grains. Fig. 2 b) shows the crystallographic orientation map based on the [0001] inverse pole figure (IPF) obtained using EBSD. The grain size is measured to be ∼500 nm using the line intercept method [39]. To visualize the change in misorientations, a black line is highlighted inside the orientation map in Fig. 2 b). The misorientation between every point on the line and the first point (origin) of the line is displayed as a misorientation profile chart. In the profile, a range of varying orientations is observed in Fig. 2 c). Using the [1010] pole figure in Fig. 2 d), two dominant texture components are observed. First, a strong (1011) fiber texture is seen, revealing the presence of all possible in-plane rotations. These grains are highlighted in purple in both the orientation map and the pole figure. Second, a (0002) texture is observed with only two in-plane grain rotations, with each orientation highlighted in red and blue in the pole figure. The pole figure comprises a single point per grain, weighted by grain size; thus, the distributions of both orientations can be seen to be approximately equal. It is known that the (0002) plane has the lowest surface energy in Ti due to the highest atomic density, and (1011) has the least strain energy due to the lowest elastic modulus [40]. Hence, the two orientations seem to compete, and both are observed by EBSD in the film deposited by DC sputtering.
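The ∼500 nm grain size just quoted follows from the line-intercept method; a simplified sketch (our construction, omitting the correction factors prescribed by measurement standards):

```python
def line_intercept_grain_size(line_length_um, n_intercepts):
    """Mean grain size from the line-intercept method: the length of a test
    line divided by the number of grain boundaries it crosses."""
    return line_length_um / max(n_intercepts, 1)

# e.g. a 20 um test line crossing 40 boundaries -> ~0.5 um mean intercept,
# consistent with the ~500 nm grain size quoted above
print(line_intercept_grain_size(20.0, 40))
```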
To understand the influence of higher ionization of the plasma and increased adatom mobility on the film microstructure, pulsed magnetron sputtering was subsequently used to deposit three additional films. A Ti film was deposited using pulsed magnetron sputtering at 600 °C and post-annealed at the same temperature for 2 h. A larger grain size (see Table 1) is seen for the film deposited by pulsed magnetron sputtering compared to the film deposited using DC sputtering. Likewise, for the film annealed for 4 h, a much smoother surface and larger grain size are observed, as seen in Fig. 3. A cross-section FIB lamella was prepared to resolve the microstructure of the film in the growth direction. The majority of the grains were found to be columnar. In Fig. 3 b), the orientation map obtained from EBSD confirms that almost all of the grains have a [0001] surface plane normal. The pole figure shown in Fig. 3 d) is obtained from the same data set. It confirms the ∼30° misorientation corresponding to Σ13 [0001] GBs. Furthermore, as previously observed, all of the grains with (0002) surface plane normal have either of the two orientations shown in the pole figure, denoted by red and blue. As a result, the film has a mazed bicrystalline microstructure, as seen in the IPF. The same is also confirmed by the presence of only two alternating orientations in the misorientation profile chart in Fig. 3 c). A purple dashed circle is drawn in Fig. 3 d) to point at the additional reflections ∼25° away from the center. As every reflection in the pole figure is weighted by the grain size, these additional reflections correspond to small grains of other orientations. The high peak current of 40 A during pulsed magnetron sputtering, in contrast to 5 A during DC sputtering, is responsible for the shift from the competing (1011) and (0002) surface plane orientations in the DC-sputtered film to all grains being (0002) oriented in the pulsed magnetron sputtered film. When the post-annealing time is extended to 8 h, the film surface remains smooth and a further increase in grain size to 3.76 ± 2.4 µm is measured using EBSD, as seen in Fig. 4. The increase in grain size with respect to the annealing duration is tabulated in Table 1. EBSD also verifies that the film continues to show the same mazed bicrystalline microstructure as observed previously. When comparing the films annealed for 4 h and 8 h, the isolated grains of other orientations are seemingly overgrown by the larger (0002)-oriented grains in the 8 h sample. The novelty of both pulsed magnetron sputtered films is that not only are they bicrystalline and thus strongly textured, but all the grains are also completely columnar from the substrate to the film surface, as seen in the SEM-STEM image of a cross-section of the film in Fig. 4 b). Such textured films are of great interest, for example in microelectronics for applications such as diffusion barriers in integrated circuits [41]. In the present context, the columnar grains are necessary to ensure that the GBs are edge-on in plan-view. The pole figure in Fig. 4 d) again shows the presence of only (0002) out-of-plane orientations, as discussed previously. After 8 h of annealing, no additional fringe orientations are observed. The orientation maps show that one of the two (0002) orientations present is a continuous large grain that extends all over the substrate. This orientation is from here on referred to as 'G1'.
The other orientation, 'G2', is present as small islands surrounded by the G1 grain. This is the typical microstructure observed in other bicrystalline films, such as a (110) Au film on a (001) Ge substrate [42]. Using the TSL-OIM software, a partition of only the G2 orientation is created from the acquired EBSD data for all three films deposited using pulsed magnetron sputtering. The G2-oriented grains are highlighted in Fig. 5 a), b) and c) for 2 h, 4 h and 8 h of annealing, respectively. As a function of annealing time, the G1 orientation grows at the expense of the G2 orientation, resulting in an increase in the single-crystallinity percentage, as shown in Fig. 5 d). Grain boundary faceting The Σ13 GBs in Fig. 3 b) and Fig. 4 c) are observed to be continuously curved and to form a mazed bicrystalline microstructure. Such GB curvature is often accommodated by faceting if the inclinational dependence of the grain boundary energy is anisotropic [29,11]. Upon investigation at higher magnification, the GBs are observed to be faceted, as seen in Fig. 6 a). The plane normals of adjacent facets are 30° apart. A selected area diffraction pattern was obtained from the same region using TEM to index the GB planes. Because the GBs are all edge-on, as previously demonstrated in Fig. 4 b), the GB planes can be indexed by simply locating the intersections of the GB plane normals with the great circle in the (0001) stereographic projection of Ti. Using this, the GB facets are indexed to be {7520} symmetric planes. From the diffraction pattern it is also apparent that {7520} is the only conceivable symmetric GB plane family for Σ13 [0001] hcp GBs. GB planes with any other Miller indices would be asymmetric. A preference for the symmetric GB planes is readily apparent. In another instance, as seen in Fig. 6 c), although the GB facet normals are still 30° apart, the planes were indexed to be low-index asymmetric {1010} // {2110} planes. The diffraction pattern shown in Fig. 6 d) confirms the asymmetry of the facets. According to the authors' observations, out of over ∼200 µm of GB length, less than ∼30 µm is asymmetric, which corresponds to less than ∼15% of the GB. Anisotropic Fe segregation in symmetric GB facets Fe was present as an impurity in the sputtering target; it is therefore of interest to examine where it resides in the deposited films. In the film deposited using pulsed magnetron sputtering and post-annealed for 8 h at 600 °C, the Σ13 [0001] Ti GB is seen to be composed of {7520} symmetric GB facets, as shown in Fig. 7 a). The facets are similar to the GB faceting discussed in Fig. 6. The elemental distribution map of Fe, acquired using STEM-EDS at 300 kV, reveals preferential segregation of Fe to every alternate symmetric facet. In Fig. 7, Fe segregation is observed at the (5720), (7250), and (2570) facets, but Fe is not detected at the (7520) and (5270) facets. The counts of the Fe signal in the EDS spectrum are integrated along the highlighted arrow over the marked region, as seen in Fig. 7 b). Fig. 7 d) and Fig. 7 e) show line profiles across the corresponding facets to clearly illustrate the segregation of Fe at the (2570) GB plane and the absence of Fe segregation in the (5270) facet. The FWHM of the spatial distribution of Fe in Fig. 7 e) is ∼0.15 nm, confirming that the segregation of Fe is limited to the GB. To quantify the amount of segregation, APT was used; the ladder-diagram evaluation behind Eq. (1) is sketched below.
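A schematic of that ladder-diagram evaluation (our construction; the exact normalization, e.g. where the detection efficiency enters, follows [38] and is an assumption here):

```python
import numpy as np

def interfacial_excess(cum_total, cum_solute, gb_index, area_nm2, efficiency=0.52):
    """Gibbsian interfacial excess from a ladder diagram: fit the bulk slopes
    on either side of the boundary, extrapolate both fits to the Gibbs
    dividing surface, and normalize the offset by the sampled GB area.
    Assumes gb_index sits roughly mid-way along the ROI."""
    n = len(cum_total)
    lo = np.polyfit(cum_total[: gb_index // 2], cum_solute[: gb_index // 2], 1)
    hi = np.polyfit(cum_total[(n + gb_index) // 2 :], cum_solute[(n + gb_index) // 2 :], 1)
    n_a = np.polyval(lo, cum_total[gb_index])   # extrapolated solute count, grain a
    n_b = np.polyval(hi, cum_total[gb_index])   # extrapolated solute count, grain b
    return (n_b - n_a) / (efficiency * area_nm2)
```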
Firstly, the GB facet containing Fe was targeted for lift-out using the conventional FIB sample preparation technique and field evaporated in laser mode. After reconstruction of the data using AP Suite, the atom distribution maps of Ti and O were obtained, as shown in Fig. 8 a). Both are seen to be distributed uniformly in the grains and in the GB region. Fe is seen in Fig. 8 b), c) and d) to distinctly segregate to the GB. To delineate the Fe GB segregation, a 0.5 at.% Fe isoconcentration surface is plotted in Fig. 8 b), and as seen in Fig. 8 d), the GB is edge-on. A cylinder of 30 nm diameter and 50 nm length is highlighted as the selected region of interest (ROI). Although the GB extends over the entire cross-section of the APT tip, Fe is seen enriched at only a fraction of the GB area. The compositions along the ROIs in Fig. 8 b) and Fig. 8 c) are plotted in Fig. 8 e) and Fig. 8 f), respectively. Negligible Fe segregation is observed in Fig. 8 e), whereas significant Fe segregation of up to ∼0.5 at.% is observed in Fig. 8 f). The Fe segregation is distributed over a width of ∼8 nm (FWHM < 4 nm). The segregation width is significantly larger than the expected GB width, owing to artefacts from field evaporation and aberrations in the trajectories of the ions [43]. Nevertheless, this apparent increase of the GB width is inconsequential for a homophase boundary, as the concentration on both sides of the Gibbsian dividing surface can be considered to be identical. Although not highlighted in Fig. 8 b), c) and d) for clarity, three ROIs of the same dimensions were taken for better statistics. As seen in Fig. 8 g), plotting the number of Fe atoms against the total number of atoms of all elements in the region of interest allows us to measure the number of GB excess Fe atoms per unit area of the interface, N_Fe. Using Eq. (1), Γ_Fe is found to be 1.25 ± 0.1 atoms/nm². Since the {7250} GB has ∼8-10 atoms/nm² (the ambiguity arises from the dependence of the planar atomic density on the width of a high-index GB plane), the amount of segregation can also be described as ∼0.2 monolayers, assuming that the Fe segregation is limited to the GB plane. The relationship between misorientation and the GB excess property measured using APT has been discussed for cubic metals [27,44]; however, no report for hcp metals was found in the literature. To emphasize that the segregation of Fe is limited to the GB plane, and to show its distribution, an areal density plot of Fe in the GB plane is shown in Fig. 8 h).

Thin film deposition

In a plasma with a large degree of ionization, the ion flux to the substrate is larger than for discharges with a low degree of ionization [36]. The magnitude of the ion current affects the morphology evolution, which is evident when comparing the SE image of the film deposited by DC sputtering in Fig. 2 a) with the SE image of the film deposited by pulsed magnetron sputtering in Fig. 3 a). The incoming sputtered atoms from the target, once adsorbed on the substrate surface, are called adatoms. Due to the dense plasma in pulsed magnetron sputtering, adatom mobility to low surface energy sites with high coordination is promoted. Enhanced surface diffusion eliminates voids and surface mounds and consequently reduces the surface roughness of the grown films, leading to a smoother surface finish [45]. Increases in the surface smoothness and density of films deposited using higher duty cycles have been reported earlier for TiO_x [33].
Additionally, the lowest surface energy plane, (0002), dominated over the lowest strain energy plane, {1011}. The high flux of ions due to pulsing leads to increased momentum transfer between the plasma and the condensed metal atoms. Such a high flux increases the mobility of surface adatoms and accommodates them on planes of the lowest surface energy. A similar change in texture evolution on changing the deposition method from DC sputtering to pulsed magnetron sputtering, favouring the high atomic density surface planes, has also been reported in other materials [46,47,48]. Moreover, all the pulsed magnetron sputtered films were observed to have columnar grains. This can be explained using the structure zone model (SZM). The SZM is commonly used to describe the dependence of the film microstructure on the discharge pressure and the homologous temperature [49]. The film microstructure according to this model is categorized into four zones, namely Zone 1, Zone T, Zone 2, and Zone 3. A detailed discussion of the model can be found in [50]. For the present film, the deposition temperature of 600 °C lies between Zone T and Zone 2 (T_s/T_m ∼ 0.4, where T_s is the deposition temperature and T_m is the melting point of the target), which is expected to lead to columnar grains. However, the most important characteristic of these films is that all of the GBs are identified as Σ13. It must be noted that even with a detailed EBSD scan with a step size of < 10 nm, the GBs seem to be rounded, forming a meandering bicrystalline microstructure. Although pulsed magnetron sputtering of Ti has been used earlier to obtain Ti films with a similarly smooth surface and columnar grains, the GBs were not characterized in those works [51,33].

Grain boundary faceting

Faceting is the dissociation of a GB into segments with different GB plane inclinations but the same overall misorientation. It principally occurs to reduce the overall GB energy. The total energy of a faceted GB is the sum of the energies of the individual GB segments and their interaction energy at the facet junctions. As faceting leads to an increase in GB plane area, the facets must have lower energy than the parent GB for faceting to occur. Although GB faceting has been widely reported in many cubic metals [52,53,54], to the best of the authors' knowledge, no experimental evidence of GB faceting has been reported for Ti. Here, we show that the GB plane, which would otherwise be asymmetric due to its continuous curvature, dissociates into distinct facets. Since all the grains are columnar, it is evident that all the GBs in a basal-plane textured film are prismatic in nature. Moreover, there is a competition between the low-index asymmetric prismatic {1010} // {2110} GB planes and the symmetric prismatic {7520} GB planes. Prismatic planes have been shown to be the preferred GB facet planes in other hcp materials [55,56]. A 3D-EBSD study on bulk Ti reported a large fraction of grains with misorientation ≤ 30° having prismatic GB planes, with a preference for {7520} [56]. Therefore, not only are the {7520} GBs preponderant in thin films, but they are also frequently observed in bulk commercially pure Ti. Also, prismatic planes are preferred in the β → α martensitic phase transformation in Ti [57].

(Fig. 8 caption, panels c-h: the ROI passing through the Fe-enriched GB region is depicted in front and side views (in teal); the Fe composition profiles along the highlighted ROIs exhibit essentially no Fe segregation and clear Fe segregation, respectively, with the Fe segregation limited to the GB width and to only a fraction of the GB; the ladder diagram shows the GB segregation of Fe with equally low solubility in the grain interiors on both sides of the interface, with the Gibbsian GB excess calculated as explained in [38]; the in-plane atomic density distribution of Fe at the GB plane.)
The {7520} prismatic planes are also predominant in Σ13 [0001] GBs in Mg [58]. Using atomistic calculations, Ostapovets et al. were able to show that the minimum of the [0001] tilt GB energy corresponds to {7520} [58]. Although similar GB energy calculations via MD simulation are missing for Ti [0001] tilt GBs, Ti is expected to follow a similar trend to Mg. Therefore, the GB faceting observed in Fig. 6 a) and Fig. 7 a) into {7520} symmetric segments can be considered a result of GB energy minimization in hcp materials. To develop a thermodynamic understanding of the anisotropic segregation of Fe observed in Fig. 7, it is necessary to know the atomic structure and the local energetics of the particular GB. The atomic structure investigation is beyond the scope of the present study, but a simpler approach can be used to rationalize the observations. The CSL model is the most widely applied tool for classifying GBs. It distinguishes grain misorientations that place a large fraction of the lattice sites of the two grains in coincidence from the remaining 'general' misorientations. The sigma value is the reciprocal of the fraction of lattice sites that coincide. If the GB plane passes through the coinciding points, then such a GB plane can have low energy. The term 'planar coincident site density' (PCSD) is used to quantify the density of CSL sites on the GB plane. In a SrTiO3 Σ3 (111) twist boundary, the high PCSD was held responsible for the observed GB plane [59]. To illustrate the role of PCSD in GB plane selection, a dichromatic pattern of hcp (0001) is drawn in Fig. 9. The (7520) symmetric plane highlighted in 'gold' has the highest PCSD, Γ = 0.43, of the Σ13 [0001] hcp GBs. This is followed by the (5720) plane highlighted in 'dark blue', which has Γ = 0.33. Caution must be exercised here, because both of these planes belong to the same family of planes and would thus naively be expected to be interchangeable. Although the {7520} family consists of twelve planes, six of them have a lower PCSD than the remaining six in the Σ13 misorientation. This peculiar behaviour is an outcome of the six-fold symmetry of the basal plane in hcp. Additionally, when considering the prismatic planes with the smallest Miller indices, {2110} and {1010} can be seen to have a much lower PCSD in Fig. 9 b). However, the selection of GB planes cannot be entirely described by PCSD. In several cubic materials, no dependence of GB plane selection on PCSD is seen [59]. In brass and nickel, a systematic GB plane analysis demonstrated that the effective interplanar spacing of the GB planes, d_eff, is a more critical criterion than PCSD in the selection of the GB planes [60]. For a symmetric GB, d_eff is the same as the d-spacing of the GB plane, while for an asymmetric GB, d_eff is given by d_eff = (d_1 + d_2)/2, where d_1 and d_2 are the d-spacings of the GB planes of the two grains. The d_eff is a means of generalizing the d-spacing criterion to asymmetric boundaries [61]. The energy of an unrelaxed boundary increases as the d-spacing decreases, because the atoms with the shortest d-spacing contribute the most to the boundary energy.
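The d_eff values quoted in the following paragraph can be reproduced with a short script. The sketch below is our illustration rather than the authors' calculation; it assumes the standard hexagonal interplanar-spacing formula for prismatic (l = 0) planes, a Ti lattice parameter of a ≈ 2.95 Å, the arithmetic-mean definition of d_eff given above, and restores the negative Miller-Bravais indices that were lost in typesetting (e.g. {7520} read as {7 -5 -2 0}):

```python
import numpy as np

a = 2.95  # Ti basal-plane lattice parameter in angstroms (assumed)

def d_prismatic(h, k):
    """d-spacing of a prismatic (l = 0) hcp plane: 1/d^2 = (4/3)(h^2 + hk + k^2)/a^2."""
    return a / np.sqrt(4.0 / 3.0 * (h * h + h * k + k * k))

def d_eff(p1, p2):
    """Effective interplanar spacing of an asymmetric GB: mean of the two sides."""
    return 0.5 * (d_prismatic(*p1) + d_prismatic(*p2))

print(f"d{{1010}} = {d_prismatic(1, 0):.3f} A")                  # 2.555 A
print(f"d{{2110}} = {d_prismatic(1, 1):.3f} A")                  # 1.475 A
print(f"d{{7520}} = {d_prismatic(7, -5):.3f} A")                 # 0.409 A
print(f"d_eff (1010)//(1120): {d_eff((1, 0), (1, 1)):.2f} A")    # 2.01 A
print(f"d_eff (3210)//(4510): {d_eff((3, -2), (4, -5)):.2f} A")  # 0.76 A
```

The output matches the values quoted below, which is a useful sanity check on the restored indices.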
Consequently, low-index GB planes are preferred due to their larger d-spacing [62]. In terms of probability, there are many possible asymmetric GBs and only a limited number of symmetric GBs. But once a low-index plane for one grain is fixed, the index of the other plane is constrained by the misorientation. As a result, most asymmetric GBs have a low d_eff and are hence unfavorable. This is why symmetric GB planes appear more frequently than a random distribution would predict. In the case of the prismatic planes of Ti, {1010} has the largest d-spacing (2.55 Å), followed by {2110} (1.475 Å). This leads to the largest d_eff, 2.01 Å, for the asymmetric (1010) // (1120) GB, which is much higher than the 0.409 Å of the {7520} plane. In fact, another asymmetric GB, (3210) // (4510), is only a 3° in-plane rotation away from the symmetric {7520}. The d_eff for (3210) // (4510) is 0.76 Å, which is also larger than the 0.409 Å for {7520}. In spite of that, the symmetric GB plane {7520} is observed more frequently in the present study, and also in Mg and the Ti-64 alloy [58,56]. Other than the influence of the extraordinarily high PCSD, one can anticipate that the atomic structure of the GB and the interaction energy of the individual facets have a vital influence on GB plane selection. It is now well established that GBs act as a 'phase' in themselves and have an atomic structure that is distinct from the two abutting grains [30,63]. In all of the existing literature that discusses the rules for selection of the GB plane [62,59,60], the atomic structure of the GB has never been taken into consideration. However, the structure of the GB can play a major role in determining its thermodynamic properties. The structure of the GB would also determine its ability to accommodate defects, the equilibrium solute segregation, and their influence on the GB energy. Therefore, although a combination of PCSD and d_eff can be used to argue for the stability of certain GB planes, further investigation of the atomic structure of {7520} is needed to establish the reason for the stability of the high-index symmetric GB plane over the low-index asymmetric plane, which is ongoing work.

Grain boundary segregation

Solute segregation at GBs is known to strongly influence the mechanical behaviour of metals, either negatively, such as by embrittlement, or positively, by pinning the GBs and thus restricting grain growth, thereby strengthening the material [64,65,66]. Solute segregation can also lead to faceting of the GB [67,68]. The segregation of Fe in Ti is a known phenomenon. Random high-angle GBs in Ti have been shown to be stabilized by the segregation of Fe in substitutional sites [24]. Fe segregation has also been reported in commercial Ti alloys, with no analysis of the influence of GB type on the segregation behaviour [69,25,70]. In the present study, Fe was observed to segregate preferentially to a few selected symmetric facets of the Σ13 [0001] Ti GB, as seen in Fig. 7. All the symmetric facets were a priori expected to show isotropic behaviour, as they belong to the same family of planes and are bound by the same two grains. The observed peculiar anisotropy is, to a first approximation, rationalized using the PCSD. The GB planes (7520) and (5270) have a higher PCSD and show lean or no segregation, whereas the GB planes (5720), (7250), and (2570) have a lower PCSD and show much higher Fe segregation. Clearly, a lower PCSD seems to favour Fe solute segregation in Ti. Beyond characterizing the GB plane, it is of great interest to quantify the amount of segregation.
Such studies began in the early 1980s, using surface analysis techniques such as Auger electron spectroscopy (AES) and secondary ion mass spectrometry (SIMS) on fracture surfaces [71,72,73,74]. With the manifold advancement of the analytical power of STEM, EDS and electron energy loss spectroscopy (EELS) are now regularly used to quantify lean segregation of even less than 0.01 monolayer at the GB [75,76,77]. More recently, state-of-the-art STEM techniques and atom probe tomography have been utilized in a correlative fashion to obtain spatial and chemical information from the same region down to almost the atomic scale [78,27,79,80,44,81]. A similar approach was used here to site-specifically lift out a symmetric GB and quantify the Fe segregation, as seen in Fig. 8. Although EDS clearly shows the Fe segregation and its partitioning to different GB segments, it requires long counting times, which may cause a redistribution of Fe, as seen in other metallic systems [82]. Therefore, APT was used for the quantification of the solute concentration. Additionally, the three-dimensional distribution of the concentration cannot be obtained with STEM-EDS, and APT also helps to find any additional scarce impurities present in the material. From the atom distribution in Fig. 8, a uniform distribution of Ti and O is confirmed. Although the concentration of O is high, ∼28 at.%, it is measured to be uniformly distributed over the entire tip volume. The high O content of the film is due to the high solubility limit (32 at.%) of O in Ti [83]. Because the deposition chamber had a vacuum of only 2.2 × 10⁻⁶ mbar, and Ti is widely used as a getter for O, the film is expected to have a high dissolved O content. However, no oxides or other secondary phases are detected. Most importantly, the distribution of O is not altered at the GB and can therefore be assumed not to have an influence on the Fe segregation. Although the GB plane cannot be determined from the APT data, we know that the film has only Σ13 GBs, with only two possible GB planes. As discussed earlier, the symmetric {7250} GB is present ∼20 times more often than the (1010) / (1120) asymmetric variant; therefore, we assume the captured GB to be the symmetric {7250} GB. This assumption is further supported by the fact that the areal density distribution in Fig. 8 h) shows the Fe distribution to be restricted to ∼40-50 nm, which is about the same as the length of a symmetric facet, as seen in Fig. 6. Also, the areal density distinctly shows that Fe is uniformly distributed across the facet / GB plane. The decrease in the density of Fe in the right part of the GB can be attributed to the onset of the adjacent facet, which is devoid of Fe. A striking similarity, confirming that the Fe segregation is restricted to a symmetric GB segment, can be seen between Fig. 7 d), e) and Fig. 8 e), f). The subsequent measurement of the interfacial excess using a 'ladder diagram' gave a Γ_Fe of 1.25 atoms/nm², or 0.2 monolayer; a schematic version of this calculation is sketched below. The GB excess measurement has been used as a method to scrutinize phase formation at GBs [84]. A coverage of 0.2 monolayer corresponds to one in every five atoms at the GB plane being Fe. With the spatial resolution of APT, there is no way to ascertain whether the present observation is GB segregation (either strain induced or chemically induced by bonding/charge transfer) or a GB phase transformation. Based on the local arrangement and bonding of Fe, its influence on the material properties could vary greatly.
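The ladder-diagram construction referred to above can be illustrated with a minimal sketch. This is our example with synthetic data and an assumed detection efficiency, not the authors' analysis pipeline: cumulative Fe counts are plotted against cumulative total counts along a ROI whose axis crosses the boundary, straight lines are fitted to the two grain-interior branches, and their vertical offset at the boundary, normalized by the analysed interface area, gives the Gibbsian excess:

```python
import numpy as np

# Synthetic ladder-diagram data along a 30 nm diameter cylindrical ROI whose
# axis crosses the GB; all numbers below are illustrative only.
n_total = np.linspace(0.0, 2.0e5, 400)       # cumulative count of all atoms
bulk_fe = 5.0e-4                              # Fe fraction in the grain interiors
gb_pos = 1.0e5                                # GB position on the cumulative axis
excess = 900.0                                # Fe atoms accumulated at the boundary
n_fe = bulk_fe * n_total + np.where(n_total > gb_pos, excess, 0.0)

# Fit the two grain-interior branches and take their offset at the boundary
pre, post = n_total < 0.8 * gb_pos, n_total > 1.2 * gb_pos
s1, i1 = np.polyfit(n_total[pre], n_fe[pre], 1)
s2, i2 = np.polyfit(n_total[post], n_fe[post], 1)
delta_fe = (s2 * gb_pos + i2) - (s1 * gb_pos + i1)   # interfacial excess Fe atoms

# Normalize by the interface area (circular cross-section of the ROI) and by
# the detector efficiency (assumed value; instrument dependent)
area_nm2 = np.pi * (30.0 / 2.0) ** 2
efficiency = 0.8
gamma = delta_fe / (efficiency * area_nm2)
print(f"Gibbsian interfacial excess: {gamma:.2f} Fe atoms/nm^2")
```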
This necessitates additional experiments using STEM to discern the atomic structure of the GB with and without Fe, which is ongoing work.

Conclusion

Following the deposition of bicrystalline Ti thin films, faceting and Fe segregation in Σ13 [0001] GBs of Ti thin films are explored here for the first time. The important findings of this work are as follows:

1. A novel template-based thin film deposition pathway for obtaining columnar grains containing tilt GBs of Ti is established using pulsed magnetron sputtering on SrTiO3 (001) substrates at 600 °C, followed by post-annealing at the same temperature for 2 h, 4 h and 8 h.

2. The ion current density during magnetron sputtering is seen to modify the texture completely, from a mixture of (1011) and (0002) to only (0002), owing to the high adatom surface diffusion.

3. EBSD analysis reveals a bicrystalline film with mesoscopically curved GBs having a very high fraction of Σ13 [0001] CSL orientation. Such a textured film with well-defined CSL GBs is demonstrated for the first time.

4. The seemingly maze-like tilt GBs are verified to be columnar in nature and are shown to be faceted, frequently into symmetric {7520} facets and sporadically into asymmetric {1010} // {2110} facets. The selection of GB planes during faceting is considered to result from a combination of high planar coincident site density and high effective interplanar spacing (d_eff).

5. EDS analysis reveals a distinct preferential Fe segregation in every alternate facet of the {7520} family: the (5720)-type facets are Fe-enriched, while the (7520)-type facets remain Fe-lean. This is explained by the difference in the planar coincident site density of the two planes within the same plane family, which is unique to the six-fold symmetry of the basal plane in the hcp structure.

6. The Fe content is quantified using atom probe tomography; the Fe segregation is observed to be limited to a fraction of the GB area.

7. The anisotropic segregation is expected to influence second-phase nucleation, GB migration, and other critical material properties, opening the scope for similar investigations in other GBs and other hcp metals.
Enhancing the Spun Yarn Properties by Controlling Fiber Stress Distribution in the Spinning Triangle with Rotary Heterogeneous Contact Surfaces

Control of the tension distribution in the spinning triangle region, which can facilitate fiber motion and transfer, is highly desirable for high-quality yarn production. Here, the key mechanisms and a mechanical model of the gradient regulation of fiber tension and motion with rotary heterogeneous contact surfaces were theoretically analyzed. The linear velocity gradient, imposed on a fiber strand by the rotary heterogeneous contact surfaces, can balance and stabilize the structure and stress distribution of the spinning triangle area, capturing exposed fibers to reduce hairiness formation and enhancing internal and external fiber transfer to raise the fiber utilization rate. Then, yarns spun without and with the rotary grooved and rotary heterogeneous contact surfaces were tested to compare the property improvements and verify the above theory. The hairiness, irregularity, and tenacity of the yarns spun with rotary heterogeneous contact surfaces were significantly improved compared with the other spun yarns, corresponding well with the theoretical analysis. Based on this spinning method, this effective, low-energy, easy-to-operate spinning apparatus can be used with various fiber materials for high-quality yarn production.

Introduction

Cotton, viscose, polyester, and other fiber materials can be made into high-quality yarns through spinning technologies. The strand fibers are first delivered from the front roller nip and are then transferred inward and outward by the twist transmission to form spun yarns during the spinning process [1,2]. Nevertheless, the phenomena of superfluous hairiness [3-6] and low tenacity in traditional ring yarns have restricted the development of high-quality yarn production. Crucially, the problems of yarn hairiness and tenacity are mainly caused by poor regulation of fiber motion and an uneven fiber stress distribution in the spinning triangle area [7-9], which fail to fully control the transfer of fibers into the yarn body and cause neps or pilling from exposed fibers, affecting yarn and fabric quality. Recently, novel spinning methods [10,11], such as siro spinning, offset spinning, and compact spinning, have been developed to produce high-quality ring yarns with a tight helical structure and preferable yarn performance. Siro spinning strengthens the yarn properties by converging twin pre-twisted strands to form a helical configuration in a single yarn [12,13]; hence, siro spun yarns show lower hairiness and higher tenacity than original ring yarns. However, the deteriorated fiber stress distribution between the twin pre-twisted strands in the convergent spinning triangle area still causes fiber loss and imperfections during formation. Offset spinning technology has been reported to diagonally offset the yarn path to modify the spinning triangle's geometry [14]. Interestingly, the fiber tension distribution can be balanced by the geometric conformation of a right-oblique spinning triangle, significantly controlling the tendency for hairiness formation and improving the yarn strength as the complete-transfer efficiency increases. However, the popularization and application of offset spinning technology are restricted by its low production efficiency, due to the dislocation between the spindles and the front rollers.
Recently, the compact spinning approach was developed to force fiber aggregation by eliminating the twisting triangle at the front nip, which condenses the fiber motion to reduce hairiness formation [15]. Typically, compact spinning is divided into airflow compact spinning and mechanical compact spinning, according to the principle of fiber agglomeration [16-19]. However, the forced agglomeration of compact spinning can easily cause inadequate fiber transfer for the yarn's inner twist distribution. To overcome the shortcomings of the spinning methods described above, various contact surface apparatuses [20] have been designed to control the fiber motion and spinning tension distribution by applying multidimensional frictional forces in the yarn formation area, which is considered an easily operated technology with low energy consumption. For instance, an energy-saving apparatus with a static contact surface [21] was installed in front of the front nip to trap and re-wrap the protruding hairs onto the yarn surface by friction, effectively reducing harmful hairs. However, the irregularity and deterioration of the yarn constituted a drawback of these surface-contacting approaches, resulting from fiber accumulation caused by irregular fiber wrapping under excessive friction. Based on these approaches, an additional self-adjustable disk [22] was located on the static contact surface to constrain the fiber motion by gravity; meanwhile, twist blockage was prevented because the self-adjustable disk rotated with the yarn movement. Thus, the self-adjustable disk could reduce spun yarn hairiness while keeping preferable yarn unevenness, but it did not improve the uneven fiber tension distribution in the spinning triangle area. Until now, the bottlenecks of spinning triangle geometry control [23] and fiber tension distribution regulation [24-26] have remained problems to be solved. In this study, a spinning approach using rotary heterogeneous contact surfaces is proposed to control the fiber stress distribution in the spinning triangle. The key mechanism and a mechanical model of the dynamic fiber migration forced by stress regulation with a linear velocity gradient were theoretically investigated in the yarn formation area. Then, comparative experiments with various yarns spun with or without the self-designed contact apparatus, which modifies the structure and stress distribution in the spinning triangle area, were conducted. Eventually, the yarn properties, including hairiness, irregularity, and tenacity, were successively measured to verify the preceding theory.

The Key Factor Influencing the Spinning Triangle Area with Rotary Grooved Contact Surfaces Contacting the Fiber Strand

According to the Z-twist ring spinning principle, the fiber strands are delivered from the front rollers to form a spinning triangle as the twist transfers into the yarn formation area [27]. Then, when the delivered fiber strand contacts the rotary grooved contact surfaces, the yarn formation area is remodeled into two parts: one part is the area between the front nip and the rotary grooved contact surfaces (H1), and the other part is the inner region of the rotary grooved contact surfaces (H2). As directly demonstrated in Figure 1a,b, the width of the spinning triangle area in traditional ring spinning, when the fiber strand is delivered from the front nip, is L1.
Subsequently, the width of the triangle construct in the yarn formation area is reduced to L2 by the force applied by the rotary grooved contact surfaces on the fiber strand. Meanwhile, the twisting torque applied to the fiber in the H1 area is greatly reduced, owing to the lengthened distance from the front nip to the twisting point; this decreases the centripetal pressure on the fibers, reducing the exposed fibers on the strand surface and promoting tight cohesion between the fibers at the remodeled twisting point. Moreover, the twist difference between entering and leaving the rotary grooved contact surfaces acts on the fiber strand in the yarn formation area, effectively controlling the locally exposed fibers as they re-twist into the yarn body [28]. Unfortunately, the deficiencies of unbalanced tension on the edge fibers are further worsened by the re-shaping of the spinning triangle with the rotary grooved contact surfaces (Figure 1b), leading to a lower transfer efficiency for the edge fibers. Balancing the fiber stress distribution and enhancing the fiber transfer efficiency (Figure 1c) are important factors in producing high-quality yarn; therefore, the structure of the rotary grooved contact surfaces must be optimized.

Geometrical Principle of Forced Fiber Tension Comparison with and without Rotary Heterogeneous Contact Surfaces

The structural shape of the rotary heterogeneous contact surfaces can be observed in Figure 2a. More importantly, the diameter of the grooved cylinder of the rotary heterogeneous contact surfaces gradually becomes smaller from left to right; therefore, the linear velocity differs from spot to spot on the rotary heterogeneous contact surfaces, gradually slowing down from the left side to the right side, as illustrated numerically in the sketch below.
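As a simple numerical illustration of this gradient (our sketch, using the 6 mm and 4 mm groove diameters from the Experimental Details section, and assuming purely for illustration that the strand drives the small end at the front-roller delivery speed):

```python
# Groove radii at the two ends of the tapered (heterogeneous) contact surface
r_left, r_right = 6e-3 / 2.0, 4e-3 / 2.0     # m (6 mm and 4 mm diameters)

# The surface rotates as a rigid body, so both ends share one angular velocity.
v_right = 12.67 / 60.0                        # m/s (12.67 m/min, assumed)
omega = v_right / r_right                     # rad/s

v_left = omega * r_left
print(f"omega = {omega:.1f} rad/s")
print(f"v_left = {v_left:.3f} m/s, v_right = {v_right:.3f} m/s")
print(f"left/right speed ratio = {v_left / v_right:.2f}")   # = r_left/r_right = 1.5
```

Whatever angular velocity the strand imposes, the surface speed at the large (left) end exceeds that at the small (right) end by the fixed ratio of the radii, which is the origin of the friction stress gradient discussed next.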
In addition, the rotary heterogeneous contact surfaces (Figure 2b), rotated by the passing strand, change the linear velocity distribution to adjust the fiber tension in the yarn formation area; thus, the rotary heterogeneous contact surfaces can provide a relatively stable and symmetrical spinning triangle zone in which the fiber stress stays balanced and the inefficiency of fiber migration is lowered, improving the structure and properties of the spun yarn [29]. For purposes of analysis, the fiber trapping before and after contact with the rotary heterogeneous contact surfaces is compared in Figure 3. As shown in Figure 3a, the spinning triangle area of traditional ring spinning shows an unstable and asymmetric construct; specifically, the pre-twisting force on the right side of the spinning triangle is greater than that on the left side. Thus, the local fibers of the original yarn on the right side of the spinning triangle area are exposed from the yarn stem to form hairiness and thin places (hairs 1, 2, 4) due to the unbalanced fiber tension, even leading to the phenomenon of fiber loss (hairs 3, 5).
Furthermore, the surface fibers on the left side of the spinning triangle are difficult to transfer inward and gradually form defects, which are caused by the lack of fiber movement control under the relatively relaxed fiber tension. In comparison, when the fiber strand is applied to the rotary heterogeneous contact surfaces (Figure 3b), a notable friction stress difference is created in the yarn formation zone between the low and high linear velocity areas of the rotary heterogeneous contact surfaces. Interestingly, increased fiber tension on the left side of the spinning triangle is imparted by the friction stress of the high linear velocity on the larger-diameter side of the rotary heterogeneous contact surfaces, which weakens the torsion and dispersion effects and strengthens the control over fiber motion and migration. Consequently, exposed fibers are re-twisted into the yarn body, reducing hairiness formation and improving fiber utilization. Furthermore, the internal torque stress of the spinning triangle is adjusted by the friction stress gradient distribution to enhance the fiber transfer efficiency, eliminating the fiber loss and agglomerated defects brought about by the inadequate transfer of some fibers.

Establishment of a Mechanical Model for Fiber Motion with Rotary Heterogeneous Contact Surfaces

The asymmetrical groove structure with a gradually changing diameter produces a linear velocity gradient over the staple strand in the reshaped yarn formation zone, and the fibers experience various levels of stress when contacting different positions on the rotary heterogeneous contact surfaces. Thus, the twist density of the spinning strand changes until it passes out of the yarn formation zone.
The deformation stress on a single fiber contacted by the rotary heterogeneous contact surfaces in the spinning triangle area was modeled. Firstly, the staple strand is regarded as a fluid composed of continuously distributed particles, and the physical quantities of the fluid (velocity v, pressure p, and density ρ) are treated as functions of a three-dimensional position vector and time t to describe the motion of a single fiber in the staple strand. In cylindrical coordinates (illustrated in Figure 4a), the single-fiber motion near point R can be expressed as the sum of parallel motion, rotation, and pure deformation. The fluid stress at this point is

P_ij = -p δ_ij + σ_ij, (1)

where, for an incompressible fluid, the deviatoric stress depends on the rate of deformation,

σ_ij = 2μ e_ij. (2)

P_ij indicates the stress to which the fiber is subjected at that point, and δ_ij represents the Kronecker symbol. The σ_ij acts as a resistance to deformation when the fiber undergoes a deformation motion and depends on the derivatives of the velocity. Meanwhile, the variations of the fiber in volume and deformation near point R can be derived as I and J, respectively. Here e_rr, e_θθ, and e_zz are the elongation velocity components in the r, θ, and z directions, respectively, and e_rθ, e_rz, and e_zθ represent the shear components of the fiber deformation velocity. In cylindrical coordinates they are expressed as

e_rr = ∂u_r/∂r, e_θθ = (1/r)∂u_θ/∂θ + u_r/r, e_zz = ∂u_z/∂z, (3)

e_rθ = (1/2)[(1/r)∂u_r/∂θ + ∂u_θ/∂r - u_θ/r], e_rz = (1/2)(∂u_r/∂z + ∂u_z/∂r), e_zθ = (1/2)[∂u_θ/∂z + (1/r)∂u_z/∂θ]. (4)

Since the single fiber is assumed to be an incompressible fluid, the stress from the velocity variation is mainly the shear stress caused by fiber deformation, and the change of the fluid volume is negligible. Thus, the single-fiber stress characteristic σ can be analysed by studying the fiber deformation J on contact with the rotary heterogeneous surfaces. Moreover, expanding the velocity field of the single fiber contacting the rotary heterogeneous surfaces (Figure 4a) according to Taylor's theorem gives Equation (5). If the higher-order terms are neglected, the average deformation of a single fiber can be established as Equation (6). Additionally, u_r, 1/r, ∂u_r/(r∂θ), and ∂u_r/∂r are negligible owing to their low values; therefore, Equation (7) can be obtained, where u_r, u_θ, and u_z are the components of the fiber velocity in the cylindrical coordinate system, V is the fiber motion velocity, r is the fiber position vector at time t, A is the cross-sectional area of the fiber, and d is the diameter of the fiber. Notably, the average fiber stress change σ caused by deformation at points A and B can then be derived (Figure 4b) as Equation (8). Considering that d_A > d_B, the relationship between V_A and V_B can be deduced as Equation (10): V_A > V_B. Combining Equations (8) and (10) with the above parameters yields the result in Equation (11): σ_A > σ_B. Accordingly, the applied fiber stress σ_A at point A is higher than σ_B at point B on the rotary heterogeneous contact surfaces. Because the twisting force of the Z twist makes the applied fiber stress on the right side of the spinning triangle area much higher than that on the left side, placing the high-stress (larger-diameter) end of the surfaces on the left balances the stress in the spinning triangle and enhances the pre-twisting force on the fibers of the left side. The preceding analysis of the mathematical model shows that the stress balance of the spinning triangle reduces marginal fiber exposure and improves fiber motion control when the fiber strand passes through the rotary heterogeneous contact surfaces.
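To illustrate the kinematic quantities in Equations (3) and (4), the short sympy sketch below (ours, using a hypothetical swirling velocity field u_θ = c/r) evaluates the cylindrical strain-rate components and confirms that such a field shears the fluid while remaining incompressible, which is the regime assumed in the model:

```python
import sympy as sp

r, theta, z, c = sp.symbols('r theta z c', positive=True)

# Hypothetical velocity field: pure swirl u_theta = c/r, no radial or axial flow
u_r, u_th, u_z = sp.Integer(0), c / r, sp.Integer(0)

# Normal (elongation) strain-rate components, Eq. (3)
e_rr = sp.diff(u_r, r)
e_tt = sp.diff(u_th, theta) / r + u_r / r
e_zz = sp.diff(u_z, z)

# Shear strain-rate components, Eq. (4)
e_rt = (sp.diff(u_r, theta) / r + sp.diff(u_th, r) - u_th / r) / 2
e_rz = (sp.diff(u_r, z) + sp.diff(u_z, r)) / 2
e_zt = (sp.diff(u_th, z) + sp.diff(u_z, theta) / r) / 2

print('volume change rate (trace):', sp.simplify(e_rr + e_tt + e_zz))  # 0 -> incompressible
print('shear component e_rtheta  :', sp.simplify(e_rt))                # -c/r**2 -> pure shear
```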
Compared with the rotary grooved contact surfaces, the stress balance in the spinning triangle may enhance fiber utilization and reduce the emergence of yarn defects, owing to fuller internal and external fiber transfer. The following simulations and experiments were conducted to confirm the aforementioned theoretical analysis.

Pre-Processing of the Simulation

To further study the theoretical analysis of the fiber stress distribution during spinning with the rotary heterogeneous contact surfaces, a simplified geometric simulation was built, as shown in Figure 5. The twin edge fibers of the spinning triangle area were separately contacted on the bottom and top of a trapezoidal cylinder (the contact force was about 0.02 cN), and the friction force on the edge fibers was then influenced by controlling the angular velocity of the trapezoidal cylinder, which rotated about its central axis with the fiber motion. Moreover, the trapezoidal cylinder and the twin edge fibers were modeled as an analytic rigid body and flexible bodies, respectively. The fiber diameter, modulus, and Poisson's ratio were set to 0.01 mm, 6 GPa, and 0.3, respectively. According to the twist transmission theory, the initial tension in the left fiber and the right fiber was 0.

Experimental Details

To confirm the theoretical analysis of the beneficial effect of the rotary grooved and rotary heterogeneous contact surfaces on Z-twist spun yarn properties, the self-designed apparatus was installed in front of the front top rollers of a Dssp-01 ring-spinning frame (Figure 6a) in a standard workshop at the Anhui Huamao Textile mill to produce 19.7 tex (30 S) cotton yarns. The self-designed apparatus was made of copper (friction coefficient 0.09), and the external diameters of the rotary grooved and rotary heterogeneous contact surfaces were both 6 mm. Additionally, the distance between the center of the self-designed apparatus and the front nip line was almost 10 mm, and the length of the contacting area on the fiber strand was about 5 mm. In particular, the groove diameter of the rotary grooved contact surfaces (OA section) was around 5.6 mm (Figure 6b). In contrast, the groove diameters of the left side (OB section) and right side (OC section) of the rotary heterogeneous contact surfaces were about 6 mm and 4 mm, respectively (Figure 6c). As for the technological parameters, 750 tex cotton roving was employed to produce 19.7 tex (30 S) Z-twist ring-spun yarn with and without the rotary grooved and rotary heterogeneous contact surfaces. All the experiments were conducted on specific spindles with the same spinning settings and conditions: the spindle speed was 10,800 rpm, the whole draft ratio was 39.2 (the rear draft ratio was 1.35), the press bar spacer was 3.0 mm, the front roller speed was 12.67 m/min, the twist factor (Ntex) was 395, the fiber composition was 100% medium staple cotton fibers, the roller gauge was 44 × 55 mm, the ring type was PG 1/42, and the traveler type was 6903 2#.

Results of the Simulation on Fiber Stress Distribution with the Rotary Heterogeneous Contact Surfaces

The simulation curves of the internal stress variations of the fibers contacting the bottom and top of the trapezoidal cylinder are plotted in Figure 7a.
Figure 7b shows the stress cloud of the right-side fiber, and Figure 7c shows the stress cloud of the left-side fiber. The initial stress values of the edge fibers in the spinning triangle area were different: about 25.5 MPa on the left side and 191.8 MPa on the right side, owing to the fiber tension difference under Z-twisting [30]. Moreover, the trends of the fiber stress over time differed between the contact positions. On the one hand, when the edge fiber contacted the bottom of the rotating trapezoidal cylinder, the fiber stress increased sharply and subsequently remained stable as time increased (yellow curve). On the other hand, the near-horizontal pink curve of the edge fiber contacting the top of the rotating trapezoidal cylinder indicates that the fiber stress remained stable as time increased, owing to the lack of relative motion between the edge fiber and the trapezoidal cylinder. As demonstrated by the simulation results, the internal stresses in the left and right edge fibers were 170.92 MPa and 178.23 MPa, respectively, when the time reached 3.5 × 10⁻⁴ s, and the tension difference between the twin edge fibers was 4.2%, caused by the differentiated friction force arising from the linear velocity gradient of the rotating trapezoidal cylinder. The stress clouds of the twin edge fibers at 3.5 × 10⁻⁴ s are depicted in Figure 7d. These results confirm Equation (11): the fiber stress distribution is balanced by the rotary heterogeneous contact surfaces in the spinning triangle area, further enhancing the fiber control.

Effect on Yarn Hairiness by Spinning with the Rotary Heterogeneous Contact Surfaces

The yarn structure is directly influenced by variations of the spinning triangle tension in the yarn formation zone [31]. Figure 8 shows that the yarns spun with the rotary grooved and rotary heterogeneous contact surfaces carried far fewer surface hairs than the original yarn, which shows numerous hairs on its surface.
This might be because the intervention of the contact surfaces controls the fiber motion, so that the surface fibers are tightly wrapped onto the yarn body. Besides the surface fibers, the helical structure of the fiber motion traced on the rotary heterogeneous contact surfaces appears clearer and more ordered, which may be due to the fiber tension gradient control from the varied linear velocity on the rotary heterogeneous contact surfaces enhancing full fiber transfer. Figure 9a again demonstrates that the yarns spun with rotary heterogeneous contact surfaces had fewer hairs than the other spun yarns. Figure 9b shows that, especially for harmful hairiness (hairs ≥ 3 mm, which can easily affect fabric performance and production efficiency), the hairiness indices of the yarns spun with the rotary grooved and rotary heterogeneous contact surfaces were reduced by 53.5% and 68.3%, respectively. These results correspond to the previous hairiness-reduction hypothesis: the linear velocity gradient distribution regulates the fiber motion to reduce the fiber stress difference in the yarn formation area, which further enhances fiber transfer, greatly decreases the amount of fiber exposed by excessive tension on the edge of the spinning triangle, and promotes cohesion between the fibers.

Effect on Yarn Evenness by Spinning with the Rotary Heterogeneous Contact Surfaces

The comparison of the CVm and blackboard unevenness of the yarns spun without and with the rotary grooved and rotary heterogeneous contact surfaces can be observed in Figure 10. The CVm value of the rotary grooved contact surfaces spun yarn was fractionally worse than that of the original yarns (p = 0.64); on the contrary, the rotary heterogeneous contact surfaces spun yarns had slightly improved CVm values compared with the original yarns (p = 0.45).
The comparison indicated that an undesirable draft occurred when the fiber strand passed through the H1 section, a result of the rolling friction applied to the fiber strand as the rotary grooved contact surfaces blocked the twist transmission flowing up to the yarn formation area. The rotary heterogeneous contact surfaces, in contrast, further improved the yarn's CVm values, owing to the offsetting of the resistant twist by the balanced fiber tension distribution in the spinning triangle area. Table 1 shows the resulting yarn imperfections (thin places, thick places, and neps) and fiber loss rates. Yarns spun with rotary heterogeneous contact surfaces had fewer thin places, fewer thick places, and lower fiber loss rates than the rotary grooved contact surfaces spun yarns and the original yarns, as well as fewer thick places and neps than the rotary grooved contact surfaces spun yarns (+50% thick places: 65.0/km vs 72.5/km; +200% neps: 45.0/km vs 65.0/km). This was mainly because the exposed fibers could be trapped and wrapped onto thin places, as the rotary heterogeneous contact surfaces dispelled the yarn's weak sections and captured fibers to reduce fiber loss. Simultaneously, the fiber tension gradient control of the rotary heterogeneous contact surfaces overcame the increase in thick places and neps concentrated on the yarn surface that arose from the unstable fiber transfer of the rotary grooved contact surfaces.

Effect on Yarn Tensile Properties by Spinning with the Rotary Heterogeneous Contact Surfaces

The comparison of the breaking force and elongation among the original yarns, rotary grooved contact surfaces spun yarns, and rotary heterogeneous contact surfaces spun yarns is presented in Table 2.
In particular, the improvement in the breaking force of the rotary grooved contact surfaces spun yarns and the rotary heterogeneous contact surfaces spun yarns was 1.6% and 5.4%, respectively, compared with the original yarn (11.45 cN/tex < 11.64 cN/tex < 11.92 cN/tex); the re-wrapped fibers and the tight yarn structure directly enhanced the fiber strength utilization, increasing the yarn breaking force. Moreover, the higher breaking force of the rotary heterogeneous contact surfaces spun yarns reflects more sufficient fiber transfer, as the fiber tension gradient control improved the cohesion and friction between fibers. In addition, the remarkable elongation improvement of the yarn spun with rotary heterogeneous contact surfaces might be due to the relatively larger slip space between fibers afforded by the stable fiber trajectories after controlling the tension distribution in the yarn formation zone.

Conclusions

In this paper, theoretical considerations were presented to compare the spinning triangle tension control mechanisms of the rotary grooved and rotary heterogeneous contact surfaces and their effects on spun yarn properties. The comparative analysis showed that the rotary heterogeneous contact surfaces contacting the fiber strand can stabilize the fiber stress difference at the edges of the spinning triangle to control fiber motion. Moreover, a mathematical model was established to confirm that the fiber tension in the spinning triangle can be balanced by the varied friction stress of the linear velocity gradient of the rotary heterogeneous contact surfaces, enhancing the fiber transfer and trapping more hairs into the yarn body to form a tighter surface structure. The experimental results tie in well with the aforementioned theory, demonstrating that the hairiness and tensile properties of the yarn spun with rotary heterogeneous contact surfaces were significantly improved compared with those of the original yarn and the yarn spun with rotary grooved contact surfaces. Significantly, the rotary heterogeneous contact surfaces spun yarn also showed slightly improved irregularity, possibly owing to the stable and balanced twist tension from the fiber stress gradient control. The results of this experimental research will hopefully serve as useful feedback for improvements in high-quality yarn production with low energy consumption, facilitating the production of high-quality yarns.
Mapping the protein binding site of the (pro)renin receptor using in silico 3D structural analysis We have previously reported that monoclonal antibodies against the (pro)renin receptor [(P)RR] can reduce the Wnt/β-catenin-dependent development of pancreatic ductal adenocarcinoma (PDAC), the most common pancreatic cancer. Antibodies against two (P)RR regions (residues 47–60 and 200–213) located in the extracellular domain (ECD) reduced the proliferation of human PDAC cells in vitro. Although these regions probably participate in the activation of Wnt/β-catenin signaling, their functional significance remains unclear. Moreover, the (P)RR ECD is predicted to possess an intrinsically disordered region (IDR), which allows multiple protein interactions because of its conformational flexibility. In this study, we investigated the significance of the two regions and the IDR by in silico 3D structural analysis using the AlphaFold2 program and evolutionary sequence conservation profile. The model showed that the ECD adopted a folded domain (residues 17–269) and had an IDR (residues 270–296). The two regions mapped onto the structural model formed a continuous surface patch comprising evolutionarily conserved hydrophobic residues. The homodimeric structure predicted by AlphaFold2 showed that full-length (P)RR comprising the ECD, single-span transmembrane, and cytoplasmic domains formed a twofold symmetric dimer via the ECD, which explains the experimentally proven homodimerization. The dimer model possessed two hand-shaped grooves with residues 47–60 and 200–213 in their palms and the IDR as their fingers. Based on these findings, we propose that the IDR-containing hydrophobic grooves act as a binding site for (P)RR and perform multiple functions, including Wnt signaling activation. Antibodies against the (pro)renin receptor residues 47–60 and 200–213 can inhibit pancreatic ductal adenocarcinoma (PDAC) cell proliferation by suppressing Wnt signaling. This study provides 3D structural insights into receptor binding and one-to-many interactions, which underpin the functional versatility of this receptor. Introduction The (pro)renin receptor [(P)RR] is a single-span transmembrane protein originally identified as a regulator of the renin-angiotensin system (RAS) required to maintain blood pressure and body fluid balance [1]. (P)RR reportedly contributes to the pathogenesis of various diseases, including fibrosis, hypertension, preeclampsia, diabetic microangiopathy, and cancer [2]. In particular, the aberrant expression of (P)RR directly leads to genomic instability in human pancreatic ductal epithelial cells and contributes to the early carcinogenesis of pancreatic ductal adenocarcinoma (PDAC) [3]. An open question regarding (P)RR functionality is its one-to-many binding; specifically, (P)RR interacts with various RAS-independent binding partners [2,4,5]. Receptor binding is based mainly on interactions between the extracellular domain (ECD) and the respective signaling protein ligands. For example, prorenin acts as a (P)RR ligand in the RAS-dependent pathway [1,5]. (P)RR binding is mediated by the ECD, leading to the nonproteolytic activation of prorenin [6,7] and the activation of local tissue RAS [2].
This binding not only induces intracellular tyrosine phosphorylation-dependent signaling pathways [1,2] but also leads to the downregulation of (P)RR and upregulation of phosphatidylinositol-3 kinase by binding of the transcription factor promyelocytic zinc finger protein with the (P)RR cytoplasmic domain [8]. (P)RR undergoes intracellular processing to produce three different forms: a full-length form [1], a truncated membrane-bound form [9], and a truncated soluble form lacking the transmembrane domain, termed soluble (P)RR [s(P)RR] [10]. The truncated membrane-bound form remains inside the cell and interacts with vacuolar H+-ATPase (V-ATPase) as a component of the multisubunit membrane-bound proton pump [2,9]. s(P)RR, which comprises most of the ECD, is secreted extracellularly and interacts with Frizzled-8 (FZD8) on the surface of the renal collecting duct principal cells, enhancing the urine-concentrating capability via FZD8-dependent β-catenin signaling [11,12]. The full-length (P)RR is a component of the Wnt receptor complex [13]. The (P)RR ECD is required for binding to FZD8 and low-density lipoprotein receptor-related protein 6 (LRP6) to maintain Wnt/β-catenin signaling [13]. (P)RR also functions in a RAS-independent manner as an adaptor between the Wnt receptor complex and V-ATPase, which allows the acidification of Wnt signalosome vesicles and subsequent LRP6 phosphorylation [13]. Furthermore, (P)RR interacts with partitioning defective 3 homolog (laminar formation) and pyruvate dehydrogenase subunit (energy metabolism) [2,4,5]. Thus, (P)RR could potentially perform multiple modulatory functions by interacting with various proteins, but these interacting partners do not share significant similarities in protein sequence. The silencing of (P)RR suppresses Wnt signaling activation, thus reducing the proliferative activity of human PDAC cells and the growth of engrafted tumors in nude mice [14]. Monoclonal antibodies (mAbs) against residues 200-213 located in the (P)RR ECD reduce PDAC cell proliferation in vitro as well as PDAC tumor growth in vivo by suppressing the activation of Wnt signaling [15]. Before generating the neutralizing mAbs, we examined the antiproliferative effects on PDAC cell growth of seven antipeptide polyclonal antibodies (pAbs) against the (P)RR ECD [15]. pAbs against two regions (residues 47-60 and 200-213) significantly reduced cell proliferation in a dose-dependent manner [15]. Although these regions probably participate in the activation of Wnt/β-catenin signaling, their functional significance remains unclear. We previously suggested that residues 269-292 located in the (P)RR ECD form an intrinsically disordered region (IDR) [16]. IDRs adopt various conformations under physiological conditions [17] and can utilize the same sequence region to bind multiple partners. In addition, IDR-equipped proteins can function as hubs in protein-protein interaction networks, which are essential for cell signaling [18,19]. A reasonable expectation is that the (P)RR IDR contributes to the multiple binding and functions of this receptor. Nonetheless, the functionality of the (P)RR IDR has not yet been reported.
The 3D structures of mammalian V-ATPase have been determined by cryo-electron microscopy (cryo-EM) and show that the truncated membrane-bound form of (P)RR binds to the inside of the V-ATPase c-ring [20,21]. Although mass spectrometry analysis has detected some full-length (P)RR in protein preparation [20], the (P)RR ECD is missing in these cryo-EM structures, and the truncated membrane-bound form alone is visible [20,21]. To date, the 3D structure of the (P)RR ECD has not been experimentally determined. The full-length (P)RR structure has been predicted using a threading-based program [22]. Recently, significant progress in protein 3D structure prediction has been made using AlphaFold2 (AF2) [23] and RoseTTAFold [24], in which a protein 3D structural model is generated using machine learning algorithms with amino acid sequences as the only input. Both programs can predict protein structures with near-experimental accuracy [23,24]. Notably, the AF2 program has predicted protein 3D structures on the human proteome scale [25], and a structural model of human (P)RR is available in the AlphaFold Protein Structure Database [26]. Therefore, it would be intriguing to utilize a 3D structural model to provide a novel perspective on (P)RR functions. In this study, we analyzed the 3D structure of (P)RR in silico using the AF2 program and evolutionary sequence conservation profile, and we investigated the functional significance of the two regions involved in the PDAC antiproliferative effect and the (P)RR IDR. Data source The Protein Data Bank (PDB) coordinate file of human (P)RR was downloaded from the AlphaFold Protein Structure Database [26] with accession ID AF-O75787-F1. Predicted local-distance difference test (pLDDT) scores for each residue were stored in the B-factor field of the downloaded coordinate file. The average pLDDT score of the structural model was calculated using the WHAT IF web server [27]. The positions of the domain boundaries were referenced according to the UniProt database [28]. The ProSA Z score was obtained using a web server [29]. The Ramachandran plot is available from PDBsum [30]. Structure prediction of human (P)RR with RoseTTAFold A structural model of human (P)RR (residues 1-350) was predicted using the RoseTTAFold Google Colab Notebook [24,31] with the mmseqs2 multiple sequence alignment (MSA) method without templates. The pLDDT scores for each residue were stored in the B-factor field of the resulting coordinate file. PyMOL [32] was used to superimpose residues 17-269 of the resulting RoseTTAFold model on the AF2 model and to calculate the root mean square deviation (RMSD) values between the models. Structural similarity analysis of human (P)RR The Dali server [33] was used to identify proteins structurally similar to human (P)RR. The human (P)RR AF2 model (accession ID: AF-O75787-F1) was used as the query protein structure against the PDB25 database [33]. The resulting Dali output coordinate files were superimposed on the human (P)RR structure and visualized using PyMOL. Evolutionary conservation analysis of human (P)RR The ConSurf web server [34] was used to estimate evolutionary sequence conservation, and the human (P)RR AF2 model (accession ID: AF-O75787-F1) was submitted to the server (https://consurf.tau.ac.il/). Homologous sequences were identified using the HMMER algorithm against the UniRef-90 protein database. The Bayesian calculation method was used to calculate position-specific conservation scores.
The resulting ConSurf scores assigned to each (P)RR amino acid residue were stored in the B-factor field of the PDB file. Homologous sequences were aligned using Clustal Omega [35]. The alignment figure was generated using ESPript [36]. GraphPad Prism 9.3 (GraphPad Software, La Jolla, CA, USA) was used to calculate the ConSurf scores of the (P)RR residues. Analysis of homodimerization of human (P)RR AlphaFold2_advanced Notebook [31] was used to obtain a structural model for the two chains of the (P)RR ECD (residues 17-270) or two chains of full-length (P)RR (residues 17-350). The model building parameters were as follows: MSA_method, mmseqs2; homooligomer, 2; use_templates, false; default for other parameters. The resulting PDB coordinate files, 3D structural models colored with the pLDDT score, and predicted aligned error (PAE) plots were used to examine homodimer formation. The ProSA Z score was obtained using a web server [29]. A Ramachandran plot was generated using PDBsum [30]. The solvent accessibility and dimerization interface were analyzed using the PDBePISA server [37]. Molecular graphics analysis Molecular images were produced using PyMOL [32]. To map the conservation profile onto the 3D structural model, a PDB file with the ConSurf scores was used to produce a surface representation. The electrostatic potential was calculated using the APBS algorithm in the PyMOL plugin, and the surface representation was colored according to the normalized consensus hydrophobicity scale [38] to identify hydrophobic surface patches. Quality of the predicted human (P)RR model The AF2 structural model of human (P)RR is shown in Fig. 1A. The model quality was assessed using the ProSA Z score, Ramachandran plot, and pLDDT score. The ProSA Z score is indicative of the overall protein structure quality and can be used to check whether the input structure is within the range of scores typically found for native proteins of similar size [29]. The structural model had a ProSA Z score of −5.83, and the model was plotted in the region of the structures obtained by X-ray crystallography (Supplementary Fig. 1A). The Ramachandran plot showed that 88.5% of the residues were situated in the most favored regions, and the remaining 11.5% were in the additional allowed regions (Supplementary Fig. 1B). The pLDDT score is a per-residue prediction confidence metric expressed on a scale from 0 to 100 using the AF2 program [23]. The average pLDDT score of the human (P)RR AF2 model was 81.3, indicating that the model was predicted confidently (pLDDT > 70). Both the folded ECD (residues 17-269) and the transmembrane domain (TM)-containing region (residues 297-332) were predicted with high confidence (Fig. 1A). In contrast, the signal peptide, residues 270-296, and the cytoplasmic domain were modeled with low confidence (Fig. 1A). The PAE plot of human (P)RR indicated high confidence in the relative position and orientation of the domains at residues 1-16, 17-269, and 290-350 (Fig. 1B). Moreover, the plot suggested an interdomain movement between the two domains involving the intervening residues 270-289. Residues 269-292 were predicted to form an IDR using sequence-based disorder prediction [16]. Approximately 30% of human protein residues are predicted with low confidence (pLDDT < 70) using AF2 [25,39]; low-confidence residues are proposed to encompass both IDRs and regions that are structured upon complex formation [25]. Hence, residues 270-296, which had low pLDDT scores, probably formed an IDR.
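As an illustration of how these per-residue confidence values can be read back out of the model file, the following minimal Python sketch (assuming Biopython is installed; the file name for the downloaded AlphaFold entry is illustrative) averages the pLDDT values stored in the B-factor field over Cα atoms. This is a local check equivalent in spirit to the model-level score quoted above, not the WHAT IF procedure used in the study.

    # Minimal sketch: average pLDDT from the B-factor field of an AlphaFold PDB.
    from Bio.PDB import PDBParser

    parser = PDBParser(QUIET=True)
    structure = parser.get_structure("prr", "AF-O75787-F1-model_v1.pdb")  # illustrative file name

    # AlphaFold stores one pLDDT value per residue in the B-factor column,
    # so collecting it at the C-alpha atom gives one value per residue.
    plddt = [atom.get_bfactor()
             for atom in structure.get_atoms()
             if atom.get_name() == "CA"]

    print(f"residues: {len(plddt)}")
    print(f"average pLDDT: {sum(plddt) / len(plddt):.1f}")
    # Residues with pLDDT < 70 are candidates for intrinsic disorder.
    low = sum(1 for p in plddt if p < 70)
    print(f"low-confidence residues (pLDDT < 70): {low}")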
To validate the model reliability, a structural model of human (P)RR (residues 1-350) was predicted using the RoseTTAFold program [24,31] (Fig. 1C), and the average pLDDT score was 0.598 on a scale of 0-1.0. The TM-containing region and the folded domain of the ECD were modeled with high confidence, whereas the signal peptide, residues 269-303, and the cytosolic region were modeled with low confidence (Fig. 1C). In the cryo-EM structure [20], the truncated membrane-bound (P)RR (residues 292-343) consists of a long α-helix and a short α-helical turn connected by an extended linker. The conformations of these (P)RR residues modeled by AF2 (Fig. 1A) and RoseTTAFold (Fig. 1C) were similar to those in the cryo-EM structure [20]. The RoseTTAFold ECD and AF2 ECD models were well superimposed, with a root mean square deviation (RMSD) value of 1.3 Å for their Cα atoms (Fig. 1D). Although both prediction programs provided an essentially similar structural model of the human (P)RR ECD, we used the AF2 prediction because modeling monomers and multimeric protein complexes is feasible using the user-friendly AF2 prediction platform [31]. [Figure caption residue: structural comparisons using the Dali server [33] are illustrated with overlaid squares in red, cyan, and yellow; C, cartoon representation of the RoseTTAFold structural model, colored by pLDDT from blue (pLDDT ≥ 0.9, most confident) to red (pLDDT ≤ 0.5, very low confidence) over a spectrum of red, yellow, green, cyan, and blue; D, secondary structure-based superimposition of the RoseTTAFold ECD (green) on the AF2 ECD (blue), with the Cα atoms of residues 17 and 269 shown as yellow and orange spheres; the right view is the same representation rotated by 90°.] (P)RR and alkaline phosphatase (ALP) structural similarities The folded domain of the human (P)RR ECD (residues 17-269) consists of a seven-stranded mixed β-sheet flanked on both sides by eight α-helices (Fig. 2A). In a structural similarity search using the Dali server [33], the (P)RR ECD and the alkaline phosphatase PhoK were superimposed with an RMSD value of 3.7 Å for the Cα atoms of 192 aligned residues (Fig. 2B). Notably, the (P)RR ECD was structurally similar to ALP, phosphodiesterase, and sulfuric ester hydrolase. The active site of PhoK coordinates two catalytic zinc ions with Asp and His residues, allowing one phosphate ion to bind to the metal ions (Fig. 2C), which is a common active-site architecture of ALP [40]. Because (P)RR does not contain the conserved Asp and His residues required to coordinate the catalytic divalent metal(s) (Fig. 2D), (P)RR appears to be a pseudo-ALP. Mapping of (P)RR regions involved in PDAC antiproliferative effects We previously reported that pAbs against two regions (residues 47-60 and 200-213) significantly reduced the proliferation of human PDAC cells in vitro [15]. These regions are likely to play a role in Wnt/β-catenin signaling activation. It is anticipated that functionally important residues are evolutionarily conserved and clustered together to form functional patches in a 3D structure [41,42]. The functional significance of these two regions in the PDAC antiproliferative effect can be explored by mapping both the residue positions and the evolutionary sequence conservation profile simultaneously on the 3D structure of (P)RR. First, we used the ConSurf server [34,43] to obtain the conservation profile of (P)RR.
Using all human (P)RR residues as the input protein sequence, the homolog search algorithm in ConSurf identified 114 nonredundant homologous sequences. The MSA of human (P)RR and eight sequence homologs is presented in Fig. 3. A high level of sequence identity (69-92%) was found among vertebrates, whereas a low sequence identity (35%) was observed for invertebrates distant from humans. Next, we examined the spatial distribution of residues 47-60 and 200-213 using the (P)RR AF2 model. Interestingly, these two regions formed a solvent-accessible continuous patch on the ECD surface (Fig. 4A). When the conservation profile was mapped to the structural model, the surface area around the two regions, especially the 47-60 residue region, consisted of evolutionarily conserved residues (Fig. 4B). Furthermore, the surface area around residues 47-60 had weak electrostatic potential (Fig. 4C) and was hydrophobic (Fig. 4D). Structural basis for the homodimerization of (P)RR Nguyen et al. [1] reported the possibility of the homodimerization of (P)RR. [Fig. 3 caption, truncated: homologs include ... (UniRef90_C3YJH1) and Stichopus japonicus (sea cucumber; UniRef90_A0A2G8L3G6); Identity (%) represents the percentage sequence identity to human (P)RR; identical residues are shown as white characters on a red background and similar residues as red characters on a white background; the secondary structures of the human (P)RR AF2 model are shown at the top (α, α-helix; β, β-strand; η, coil); the processing site of site-1 protease (S1P) is depicted as a black triangle.] Schefe et al. [44] and our group [45] have experimentally demonstrated that (P)RR forms a homodimer, and dimerization occurs via the ECD [45]. The folded domain of the (P)RR ECD constitutes a single β-sheet in the core of the protein flanked by α-helices (Fig. 4E). The domain adopts a semicircular shape with a convex face and a flat face. Interestingly, mapping the conservation profile onto the 3D structural model revealed that the convex face comprised variable residues (left, Fig. 4F). Simultaneously, the flat face consisted of conserved residues (right, Fig. 4F). One β-strand (β5) was located on this flat face (Fig. 4E). The participation of an edge strand in homodimerization has been reported previously [46][47][48]. Thus, we anticipated that the edge β-strands present on the conserved flat surface might align in an antiparallel orientation, forming a unified β-sheet that adopts a homodimeric structure. To examine (P)RR homodimerization in silico, AF2 prediction was performed to evaluate the dimerization capability of the two IDR-truncated ECD chains (residues 17-270). The predicted models adopted a homodimeric structure with a high average pLDDT score and low interchain PAE value (Supplementary Fig. 2A). The two chains of the ECD assembled to form a twofold symmetric dimer (back-to-back) stabilized through intermolecular interactions between the conserved flat surfaces (left, Fig. 5A). The dimer was composed of one β-sheet united by an edge β-strand from each chain (right, Fig. 5A). Next, we predicted the homodimeric structure of the full-length human (P)RR (residues 17-350). The intrachain PAE plots indicated that all the models comprised two independent domains in one chain and that the model with the highest pLDDT score exhibited a defined interchain relative position and orientation (Supplementary Fig. 2B). Two chains of full-length human (P)RR formed a back-to-back homodimer via the ECD (Fig. 5B). The ProSA Z score of −6.2 for the structural model (Supplementary Fig.
2C) indicated that the model was situated in the region of structures obtained by X-ray crystallography. The Ramachandran plot (Supplementary Fig. 2D) showed that most residues in the folded domain of the ECD occupied the most favored regions. The validation analyses based on the ProSA Z score and Ramachandran plot indicated that our (P)RR structural model was acceptable for further analysis in this study. [Fig. 4 caption, excerpt: each amino acid residue is colored according to its ConSurf conservation score; the color-coding bar shows the ConSurf scheme, which varies from green (highly variable; score 1) to purple (highly conserved; score 9); C, electrostatic surface potential colored from negative (−5 kT/e, red) to positive (+5 kT/e, blue).] The twofold symmetric dimer of (P)RR had two surface regions, comprising residues 47-60 and 200-213 of each (P)RR (Fig. 5C and Supplementary Movie 1), that were not buried upon homodimerization (Fig. 5C). [Fig. 5 caption: A, two chains of the ECD (residues 17-270) depicted as a gray transparent surface with a cartoon representation, each chain colored on the rainbow scale from the N-terminus (blue) to the C-terminus (red); B, two chains of the full-length human (P)RR (residues 17-350) in cartoon representation, colored in rainbow format except for the TM regions in yellow; C, surface representation of the full-length human (P)RR homodimer with the two chains colored gray and pale cyan, and residues 47-60, 200-213, and 281 shown in red, blue, and green, respectively; D, evolutionary conservation profile of human (P)RR evaluated in terms of local structural environment, with ConSurf scores shown as dots color-coded by environment (solvent accessible, blue; solvent inaccessible, red; dimer interface, green); E, region-wise averaged ConSurf scores of human (P)RR, where the region "others" contains all ECD residues except 47-60, 105-153, 200-213, and 270-296; F, (P)RR homodimer viewed perpendicular to the twofold axis, one chain blue and the other cyan, residues 105-153 colored red, and dimer interface residues represented as green stick models; G, close-up of solvent-accessible highly conserved residues (green) in the two regions, residues 47-60 (red) and 200-213 (blue).] Notably, the IDR (residues 270-296) of one chain protruded over residues 47-60 of the other chain (Fig. 5C), forming a flexible flap over the region. Conservation profile evaluated with respect to the local structural environment We first identified solvent-accessible/inaccessible residues and dimer interface residues using the PISA server [37]. A surface area equivalent to 4.5% (1017 Ų) of one chain was buried upon homodimerization. Next, per-residue conservation scores were examined based on solvent accessibility and involvement in the dimer interface (Fig. 5D). Notably, very few residues with scores < 5 were observed in the 105-153 region, and approximately half the residues in this region were located at the dimer interface (Fig. 5D). The average conservation score of 6.45 in the 105-153 region was higher than that of the full-length protein (Fig. 5E). When residues 105-153 were mapped to the predicted homodimeric structural model, these conserved residues were found to contribute directly to homodimer formation (Fig. 5F). Further inspection of the (P)RR structural model revealed the presence of a groove formed by a right-hand-shaped structure with three distinct areas: palm, thumb, and fingers (Fig. 6A).
The palm comprised the 47-60 and 200-213 regions. The fingers corresponded to the IDR-based flexible flap mentioned above, and the groove had space to accommodate a single extended loop (Supplementary Fig. 3 and Supplementary Movie 2); the grooves were largely hydrophobic (Fig. 6B). The conservation profile indicated that the palm, thumb, and finger areas consisted of highly conserved residues (Fig. 6C), which implied the functional importance of the "(P)RR hand." Based on the common features of the IDR, we anticipated two possibilities for the interaction. In the first, the "(P)RR hand" catches the sticky or relatively hydrophobic loop (Supplementary Fig. 3 and Supplementary Movie 2) as a result of the conformational flexibility of the IDR-based fingers. [Fig. 6 caption: Mapping the protein binding sites of (P)RR. A, surface representation of the ECD of the full-length human (P)RR homodimer: palm, residues 47-60 (blue) and 200-213 (red); thumb, residues 65-70 (red dotted); fingers, residues 270-296 (blue dotted); the model is shown with the same color coding as in Fig. 5C, except the IDR in pink (residues 270-296). B and C, surface representations with hydrophobicity score and conservation profile, respectively, in the same orientation as A. D, proposed "catch and tether" mechanism of (P)RR: (P)RR catches a loop with its "hand" and tethers two proteins. E, AF2 structural models of FZD8, LRP6, and (P)RR depicted on the same scale for comparison; CRD, cysteine-rich domain; E1-E4, four β-propeller/epidermal growth factor repeats; L1-L3, three low-density lipoprotein receptor type A repeats; the LRP6 structure downstream of L3 is shown in cartoon representation.] In the second, the (P)RR IDR directly catches its partner via conformational plasticity during binding. Thus, it is conceivable that the hand-shaped architecture allows (P)RR to catch two binding partners and tether them closely in space to facilitate protein-protein interactions (Fig. 6D). The hetero-oligomer consisting of two (P)RR molecules and one or two binding partners can be formed adjacent to the cell membrane (Fig. 6D). Discussion A previous in silico analysis by Sanchez-Guerrero et al. [22] generated a 3D structural model of full-length (P)RR and reported the probable binding site residues for two prorenin regions, which have since been shown to be important for (P)RR binding [49]. The constructed structure is monomeric [22]. Here, we revealed new structural features of the binding site of (P)RR. First, the structural basis for homodimerization was determined. Second, the dimeric 3D model demonstrated that the two regions involved in the PDAC antiproliferative effect and the (P)RR IDR formed two evolutionarily conserved grooves, which probably act as a protein binding site to exert multiple functions, including Wnt signaling activation. This is the first report of (P)RR homodimerization and its multiple functionalities based on 3D structural information. It is notable that L48, M50, G51, F52, and S59 are the protein binding site residues commonly identified in both the previous study [22] and our study (Fig. 5G), although the assembly state and overall 3D structure are not the same in both studies. Our in silico analysis demonstrated that full-length human (P)RR formed a back-to-back homodimer via the ECD (Fig. 5B). The conservation profile obtained independently of AF2 indicated that residues 105-153 were highly conserved (Fig. 5D, E). The direct contribution of these residues to homodimer formation (Fig.
5F) illustrates the structural importance of these conserved residues and is consistent with the experimentally proven homodimerization [8,44,45]. Heterodimer formation between full-length (P)RR and s(P)RR [45] can be achieved through intermolecular interactions between the two ECD surfaces (Fig. 5F). Upon binding to Wnt ligands, the Frizzled and LRP6 receptors are bridged and assembled into multiprotein complexes termed Wnt signalosomes [50][51][52]. (P)RR interacts with FZD8 and LRP6 in the extracellular space in a Wnt-independent manner [13]. FZD8 is composed of a cysteine-rich domain (CRD), a flexible and largely unstructured CRD-to-TM linker, and a seven-span TM domain [51] (Fig. 6E). The ectodomain of LRP6 is composed of four β-propeller/epidermal growth factor repeats (E1-E4), three low-density lipoprotein receptor type A repeats (L1-L3), and a short linker [52] (Fig. 6E). Considering the spatial proximity of these proteins to the membrane (Fig. 6E), (P)RR probably interacts with the FZD8 ECD (CRD and/or linker) and a part of the LRP6 ectodomain (L1-L3 and/or linker) via the catch and tether mechanism. This tethering likely facilitates Wnt signalosome formation and would be advantageous for effectively responding to Wnt ligands. pAbs against the 47-60 and 200-213 regions [15] probably interfere with signalosome assembly at the protein-binding interface. When an interaction partner binds to the "(P)RR hand," the anticipated interaction relies primarily on hydrophobic interactions, which have relatively low stereospecificity and are tolerant of multiple binding orientations. Alternatively, an interaction partner may directly bind to the (P)RR IDR. This binding mode may explain the known one-to-many binding and the multiple functions of (P)RR. Deletion analysis showed that the (P)RR ECD is required for the biogenesis of active V-ATPase [53]. (P)RR binds (directly or indirectly) to the V-ATPase subunits ATP6V0C and ATP6V0D1, where binding of the ATP6V0C subunit is achieved via the (P)RR ECD [13]. To facilitate active V-ATPase biogenesis, the multiple subunits of the assembling V-ATPase may be caught and tethered by the "(P)RR hand." Thus, (P)RR probably functions as an extracellular scaffold protein in Wnt/β-catenin signaling, a V-ATPase assembly factor [54], and a hub in protein-protein interaction networks in various signaling pathways. Abbas et al. [20] reported that some full-length (P)RR is present in cryo-EM protein preparations. The cryo-EM structure of a minor population of V-ATPase or Vo complexes that contain intact (P)RR may clarify how the multiple subunits are assembled into V-ATPase with the help of full-length (P)RR. s(P)RR is considered a useful biomarker for diseases and a biologically active paracrine factor [2]. Site-1 protease (S1P) is responsible for s(P)RR generation, and L281 is the C-terminal residue after S1P cleavage [55]. Because s(P)RR probably retains the hand-shaped architecture (Fig. 6A), it may confer catch and tether functionality. The role of (P)RR in blood pressure regulation and the development of hypertension has been unveiled based on (P)RR levels in the brain and kidney [2]. (P)RR protein levels are elevated in the subfornical organ of the brain of hypertensive humans [56], and brain (P)RR can regulate blood pressure by altering prohypertensive and antihypertensive pathways through local angiotensin-II-dependent and angiotensin-II-independent mechanisms [2].
Adrenal gland (P)RR is anticipated to contribute to adrenal aldosterone synthesis and the pathogenesis of hypertension [2,57], although the precise mechanism leading to the enhancement of aldosterone production remains unknown. In this study, we proposed that the IDR-containing hydrophobic grooves act as a protein binding site of (P)RR and allow multiple protein interactions. Therefore, this study may provide clues to help find new binding proteins for (P)RR and clarify the (P)RR-mediated molecular mechanisms leading to hypertension. In conclusion, our in silico structural analysis mapped the binding site of (P)RR. This study provides the first 3D structural insight into receptor binding and one-to-many interactions, underpinning the functional versatility of this receptor. Our findings will increase the understanding of disease pathogenesis and help explore novel modalities to treat human diseases, including hypertension and cancer. Further analyses are required to experimentally characterize IDR-based protein interactions between (P)RR and its interaction partners.
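The interface analysis above relied on the PDBePISA server; a rough local approximation of the buried surface area can be sketched with Biopython's Shrake-Rupley SASA implementation (Biopython >= 1.79). The file and chain names below are illustrative assumptions, and numerical agreement with PISA will only be approximate because the SASA parameters differ between tools.

    # Minimal sketch: buried surface area per chain from solvent-accessible
    # surface areas, BSA = (SASA_A + SASA_B - SASA_AB) / 2.
    from Bio.PDB import PDBParser
    from Bio.PDB.SASA import ShrakeRupley

    parser = PDBParser(QUIET=True)
    sr = ShrakeRupley()

    def total_sasa(path):
        s = parser.get_structure("s", path)
        sr.compute(s, level="S")  # accumulate SASA at the structure level
        return s.sasa

    sasa_ab = total_sasa("prr_dimer.pdb")    # homodimer model (hypothetical file)
    sasa_a = total_sasa("prr_chainA.pdb")    # each chain saved separately
    sasa_b = total_sasa("prr_chainB.pdb")

    buried_per_chain = (sasa_a + sasa_b - sasa_ab) / 2
    print(f"buried surface area per chain: {buried_per_chain:.0f} A^2")
    # For comparison, the PISA analysis in the text gives ~1017 A^2,
    # about 4.5% of one chain's surface.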
Recent Chikungunya Virus Infection in 2 Travelers Returning from Mogadishu, Somalia, to Italy, 2016 2 positions owing to mixed bases. The chromatogram showed equal intensities at both positions C and T (Figure, http://wwwnc.cdc.gov/EID/article/22/11/16-1171-F1.htm). This finding indicates that quasispecies might be present. These positions in the human sequence coincided perfectly with 2 of the 3 positions found to vary in the sequence obtained from oyster sample D. All 3 base variations were a replacement of a C with a T (Figure), which further supports the presence of quasispecies. However, in this setup, it was not possible to prove the origin of these closely related species. Whole-genome sequencing using next-generation sequencing would be a way to prove the simultaneous presence of all quasispecies in relevant samples. Since the emergence of GII.P17-GII.17 in Asia, sporadic cases have been reported worldwide (3,9). In this study, we established a direct molecular link between a common food source and a series of acute gastroenteritis outbreaks. Even though these represent European outbreaks, our results show that oysters act as vehicles for the rapid spread of emerging noroviruses to distant geographic areas. Furthermore, we document that quasispecies of GII.P17-GII.17 might occur simultaneously in a host. This finding should be considered in future molecular-epidemiologic outbreak investigations. To the Editor: Since chikungunya virus (CHIKV) was first isolated in 1952 (in Tanzania), outbreaks have occurred every 7-20 years in countries in Africa and Asia, and since 2013, it has been identified in the Americas (1,2). However, no cases have been reported from the Horn of Africa (3,4). We confirmed CHIKV infection acquired in 2016 by 2 travelers to Somalia who returned to Italy. In June 2016, a Somali woman (patient 1) was referred to the Infectious and Tropical Diseases Unit, Careggi University Hospital, in Florence, Italy, because of severe diffuse bilateral arthralgia and edema in hands, wrists, ankles, and feet. Five days earlier, she had returned to Italy from Mogadishu, Somalia, where she had spent 45 days visiting relatives. The woman had been living in Italy since the 1990s and returned to Somalia each year for ≈2 months; she denied travel to other countries. She reported that symptoms started abruptly in May, 17 days after arriving in Somalia (28 days before returning to Italy). At symptom onset, arthralgia was associated with fever and skin rash, which lasted a few days. In early July 2016, another Somali woman (patient 2) with bilateral arthralgia in her hands, wrists, ankles, and feet associated with foot edema sought medical care at the same hospital 7 days after returning from a 65-day trip to Mogadishu, where she visited relatives. The woman had been living in Italy ≈20 years; the only other travel she reported was to Kenya in 2012. Her symptoms started in June, 20 days after arriving in Somalia (45 days before returning to Italy). At symptom onset, she also had skin rash and fever, which lasted a few days. Both patients reported that, during the same period, some of their relatives in Mogadishu had similar symptoms and were clinically diagnosed as having chikungunya fever by local doctors. Both also reported that, during the same period, other cases had been reported in Mogadishu by mass and social media and, thus, the local population was aware of the disease. Serum samples for patients 1 and 2 were positive for CHIKV antibodies (Table).
Both patients were treated with nonsteroidal anti-inflammatory drugs and corticosteroids and are receiving follow-up. According to the US Centers for Disease Control and Prevention, as of April 22, 2016, CHIKV had not been reported from Somalia (4), and no evidence exists for CHIKV circulation in that area of the Horn of Africa (3). In addition, on August 3, 2016, we performed a literature search in PubMed, Embase, and ProMED-mail, and found no reports of CHIKV in Somalia. Poorly documented preliminary data on the presence of CHIKV in Somalia were recently reported in 2 documents by the United Nations Office for the Coordination of Humanitarian Affairs. Outbreaks occurred in the region during 1997-1998 and 2006-2007 (6,7), and a dengue outbreak occurred during 1992-1993 (8). The current outbreak in Somalia could have been triggered by several factors, including circulation of CHIKV in neighboring Kenya (references 8,9 in online Technical Appendix) and heavy rains that led to flooding in southern and central Somalia beginning in January 2016 (reference 1 in online Technical Appendix). CHIKV has the potential to provoke explosive outbreaks in naive populations (9), so the current outbreak may greatly affect the economy and public health in Somalia. Systematic studies to understand the magnitude of the ongoing epidemic are needed. In the meantime, local public health stakeholders in Somalia and healthcare workers worldwide caring for travelers returning from Somalia should be aware that CHIKV is circulating in the country. This report confirms the importance of travel medicine services in performing early diagnosis of imported arboviral diseases, not only to thwart secondary transmission during periods of competent vector activity but also to help to detect or confirm virus circulation in previously unaffected countries.
Carbon Kinetic Isotope Effect in the Oxidation of Methane by the Hydroxyl Radical The reaction of the hydroxyl radical (HO) with the stable carbon isotopes of methane has been studied as a function of temperature from 273 to 353 K. The measured ratio of the rate coefficients for reaction with ¹²CH₄ relative to ¹³CH₄ (k₁₂/k₁₃) was 1.0054 (±0.0009 at the 95% confidence interval), independent of temperature within the precision of the measurement, over the range studied. The precision of the present value is much improved over that of previous studies, and this result provides important constraints on the current understanding of the cycling of methane through the atmosphere through the use of carbon isotope measurements. Methane (CH₄) is an important trace gas in the atmosphere. It is a key sink for the tropospheric hydroxyl radical. Methane contributes to greenhouse warming [Donner and Ramanathan, 1980]; its potential warming effects follow only CO₂ and H₂O. Methane is a primary sink for chlorine atoms in the stratosphere and a major source of water vapor in the upper stratosphere. The concentration of CH₄ in the troposphere has been increasing at a rate of approximately 1% per year, at least over the past decade [Rasmussen and Khalil, 1981; Blake et al., 1982; Steele et al., 1987; Blake and Rowland, 1988]. Ice core measurements indicate a rapid increase began a few hundred years ago [Craig and Chou, 1982; Rasmussen and Khalil, 1984]. The reasons for this increase have not been established, but it has been suggested that it could be due to an increase in source emissions, a decrease in the atmospheric loss rate, or both. Several approaches have been applied in order to understand the sources and sinks of atmospheric methane (see discussion by Cicerone and Oremland [1988]). One way to study the methane budget is through the use of stable isotopes of carbon, as proposed by Stevens and Rust [1982]. Measurements of ¹³C/¹²C ratios in methane in remote background air and in methane sources have been used with data of fluxes from these sources to estimate relative source strengths and to provide input to models of atmospheric methane [e.g., Stevens ...]. CH₄ + HO → CH₃ + H₂O (R1) The rate coefficient for this reaction has been studied extensively (see review by Ravishankara [1988]), but data indicating the effect of isotope substitution in methane are scarce. Fractionation occurs in the atmosphere because the rate coefficient for reaction (R1) is slightly larger for ¹²CH₄ relative to ¹³CH₄.
This secondary (i.e., involving isotopic substitution at a position other than the direct reaction center and involving an atom not split off in the reaction) kinetic isotope effect is expected to be small, and indeed previous studies near room temperature have found an effect of 1% or less [Rust and Stevens, 1980; Davidson et al., 1987]. We attempted to improve the precision of the value for the carbon kinetic isotope effect in reaction (R1) (i.e., the ratio of the rate coefficients for the reaction of ¹²CH₄ with hydroxyl as compared to the reaction of ¹³CH₄) near room temperature. Additionally, we investigated the temperature dependence of this rate coefficient ratio. The kinetic isotope effect in reaction (R1) has been studied in this laboratory previously. Many of the experimental details reported in that study apply here. Differences from our earlier study will be pointed out. The experiment involved the reaction of a mixture of methane containing both stable carbon isotopes, nominally at relative atmospheric abundances, with the hydroxyl radical. We measured the methane concentration and isotopic composition before and after a reaction period. The relation between the amount of methane converted, the change in the ratio of stable carbon composition, and the ratio of rate coefficients for reaction (R1) for the two species is given by (see derivation in Davidson et al. [1987]): ln(A) = (k₁₂/k₁₃)[ln(A) + ln{(δ_t + 1000)/(δ₀ + 1000)}] (1), where k₁₂ is the rate coefficient for reaction (R1) for methane containing carbon-12 and k₁₃ is that for carbon-13 methane; A is the fraction of methane remaining at time t; and δ is a measure of the ratio of carbon-13 to carbon-12 (in per mil, or parts per thousand difference), at the start of an experiment (δ₀) and the end (δ_t), defined as follows: δ_x = (R_x/R_std − 1) × 1000 (2), where R_x and R_std stand for the ratio of carbon-13 to carbon-12 in a sample, x, and in a standard, respectively. In this case, the standard is PeeDee Belemnite, which is the commonly accepted reference for stable carbon isotope work. The choice of standard has no effect on the final kinetic isotope effect derived. The accuracy of the measured kinetic isotope effect is sensitive to errors in determining the extent of conversion (A) and the isotope ratios (δ). This sensitivity is minimized at relatively large fractional conversions, because of the increased amount of isotopic fractionation which occurs. Hydroxyl radicals were produced in sufficient amount to remove 50-90% of the methane in 24 hours, corresponding to a steady HO concentration of 2 to 6 × 10⁹ cm⁻³ as inferred from the rate of conversion of methane. This design is relatively insensitive to contaminants found in the commercially prepared methane (such as light alkanes) because only the unreacted methane is analyzed (a problem with ethane is possible because it may be cryogenically trapped with methane in the isotope analysis, discussed below). We used the same hydroxyl radical source as our previous study. Ozone was photolyzed in the Hartley band to produce excited oxygen atoms which react with water vapor. The cell was illuminated through a quartz window with radiation from a 300-W xenon arc (ILC Corporation) filtered through a Corning glass filter 7-54. The filter was cooled by circulating tap water through the filter mount. This arrangement minimized possible effects of heating the reaction mixture by the photolysis lamp. The filter inhibited photolysis of ozone in the Chappuis bands, which produces ground-state oxygen atoms. Ground-state oxygen does not react with water vapor to produce hydroxyl radical.
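Equations (1) and (2) translate directly into code. The following minimal Python sketch computes k₁₂/k₁₃ from a measured fraction remaining and initial/final δ¹³C values; the input numbers are illustrative only (the final δ value is invented for the demonstration), and, as the text notes, the choice of isotope standard cancels in equation (1).

    import math

    def delta13C(R_sample, R_std=0.0112372):
        """Equation (2): delta value in per mil relative to a standard.
        The default 13C/12C ratio is the conventional PDB value and is
        shown only for illustration; it cancels out of equation (1)."""
        return (R_sample / R_std - 1.0) * 1000.0

    def kie(A, delta0, delta_t):
        """Equation (1): k12/k13 from the fraction of methane remaining (A)
        and the initial/final delta-13C values (per mil)."""
        return math.log(A) / (math.log(A) +
                              math.log((delta_t + 1000.0) / (delta0 + 1000.0)))

    # Illustrative run: 70% conversion (A = 0.3), the study's initial delta of
    # -35.9 per mil, and a hypothetical final delta; this prints ~1.0055,
    # close to the reported average of 1.0054.
    print(f"k12/k13 = {kie(A=0.3, delta0=-35.9, delta_t=-29.5):.4f}")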
The reaction mixture was continuously stirred throughout the course of a run with a stainless steel bellows pump. The volume of the pump and connecting tubing was minimized; less than 0.05% of the reaction mixture was outside of the cell at any given time. The entire cell mixture was circulated every 5 to 10 min. The cell was typically circulated for 10 min before the first sample was extracted, and for 10 min after the photolysis lamp was turned off before the final sample was taken. The following procedure was used to prepare a reaction mixture. A volume of liquid water (0.2 to 0.8 cm³) was added through a septum to an evacuated, temperature-regulated stainless steel cell (approximately 48 L volume), which has been described previously [Shetter et al., 1987]. Oxygen (Linde UHP, greater than 99.99% purity) was added to the cell in order to insure that methyl radicals (CH₃), once formed, would be converted to stable products (CO or CO₂) rather than be converted back to methane or be associated to form ethane. Back reaction to methane and ethane could provide alternate pathways for isotope fractionation. The methoxy radicals formed react with oxygen to form formaldehyde, which subsequently photolyzes or reacts with hydroxyl. The extent of conversion of methane and nitrous oxide was monitored by infrared spectroscopy. A BOMEM model DA3.01 Fourier transform spectrometer system was interfaced to the cell. An infrared beam from a heated SiC rod was collimated through the cell's White-type internal multiple-pass optics (total optical path 48.6 m) and detected by a liquid N₂-cooled detector; the beamsplitter and detector combination gave a linear relationship between measured signal and species concentration. The initial spectrum (100 interferometer scans) was taken before samples were withdrawn for GC and MS analysis, and the Beer-Lambert linearity for methane was verified. The relative standard deviation of the IR measurements (3-5%) was larger than that for the gas chromatographic determinations. The methane remaining as determined from spectroscopic measurements (FTIR) as compared to GC measurements is shown in Figure 1. The least squares fit of the data yields a slope of 0.98 (±0.06), which is not significantly different from unity. The isotope ratio determinations were performed in a fashion similar to that discussed by Tyler [1986; Tyler et al., 1988] and Davidson et al. [1987], using a new high-volume, fast-flow-rate combustion train to convert CH₄ to CO₂. A major improvement over the earlier system is the use of an oven catalyst of platinized alumina (1% loading) at 800°C. Carbon dioxide and water are removed prior to the entry of the sample gas stream into the oven using a series of four multiple-loop traps at liquid nitrogen temperature (77 K). Since Pt-alumina is porous and potentially subject to high blanks, it is conditioned periodically by flushing with dry zero air. Our carbon blank is less than 1 µL of CO₂ for 300 L of clean zero air processed. The recovery of carbon dioxide produced from the conversion of methane is greater than 99%, even with relatively small samples such as those for this experiment (20 to 50 µL). The mass spectrometer used in this study was a Finnigan-MAT Delta-E model, which resulted in overall precision of better than 0.1‰ by the use of a specially designed cold-finger inlet system. The working standard for this study was Oztech-002 (Oztech Gas Co.), which is −30.011‰ relative to PDB carbonate. The values are reported with respect to PDB carbonate, with the usual corrections for background, leakage, capillary fractionation, and ¹⁷O. The separation and combustion procedure would not distinguish carbon originating in ethane from that in methane; therefore, a run was chosen for analysis of light alkanes.
All alkanes including ethane were below detection limits and could constitute no more than 0.1% of the methane concentration, and thus present no interference in the isotope ratio analysis. Temperature and Conversion Dependence of k₁₂/k₁₃ The results of this study are analyzed in several ways. First, a rate coefficient ratio (k₁₂/k₁₃) was calculated for each experiment from equation (1). These data are presented in Figure 2 and Table 1, together with the average and 95% confidence intervals for the room temperature recommendations for k₁₂/k₁₃ of Davidson et al. [1987] and Rust and Stevens [1980]. Comparison with Previous Measurements Our new result is about one-half the calculated fractionation of methane carbon isotopes as determined by Davidson et al. [1987]. The present result has an uncertainty which is nearly an order of magnitude smaller than Davidson et al. [1987]. Possible reasons for this difference will be discussed below. The present result is in agreement with the results of Davidson et al. [1987] and Rust and Stevens [1980] within the large uncertainties associated with those studies (k₁₂/k₁₃ = 1.010 ± 0.007 and 1.003 ± 0.007, respectively, at the 95% confidence level; note the confidence interval for Rust and Stevens is calculated from the data, not as reported by them). It has been pointed out that the individual k₁₂/k₁₃ values from the Davidson et al. study fall into two groups [Craig et al., 1988; Stevens and Engelkemeir, 1988]. At least two factors may have contributed to the large spread of values from the study of Davidson et al. [1987] as compared to the present one. The precision of the extent of methane conversion was greatly improved in the present study (less than 1%) as compared to the previous study (about 9%). Also, in this study isotope ratio measurements were performed on the methane at the beginning of every run. In the previous experiment, seven determinations were performed and averaged. While one would expect the isotope ratio of the unreacted methane to be easily characterized and constant, some variability was found in the present study (average initial δ value equals −35.9 ± 0.5‰). The amount of methane measured in the initial samples was always slightly higher. Modeling the Carbon Kinetic Isotope Effect Although Ehhalt et al. [1989] found that primary kinetic isotope effects in simple systems (H₂ and HD with HO) can be calculated quite successfully using the BEBOVIB-IV computer program [Burton et al., 1977], the use of this program for the carbon kinetic isotope effect in the methane-hydroxyl reaction gave a range of values (1.00 to 1.04). These values depend upon the shape assumed for the potential energy surface of the activated complex. A more sophisticated method apparently is required to perform accurate simulations of a secondary kinetic isotope effect such as occurs in this reaction. Implications for the Atmospheric Methane Budget What does the value for the ratio of the two rate constants tell us about the ¹³CH₄/¹²CH₄ ratio in the sources of atmospheric methane? We performed a simplified calculation following Craig et al. [1988], with an extension to include possible soil sink fractionation as discussed by King et al. [1989]. CONCLUSIONS We performed a study of the temperature dependence of the stable carbon kinetic isotope effect in the oxidation of methane by hydroxyl. The average effect measured in our study is k₁₂/k₁₃ = 1.0054 (±0.0009, 95% confidence interval). This value is larger than that of Rust and Stevens [1980] and lower than that of Davidson et al.
[1987], but with much better precision. The ratio is independent of temperature from 273 to 353 K, within the precision of the results. This new value may be used to constrain the atmospheric methane budget using isotope ratio studies, and suggests more measurements of methane sources and sinks are necessary in order to fully understand atmospheric methane.
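For readers reproducing the averaging step behind a quoted mean and 95% confidence interval, a minimal sketch over per-run k₁₂/k₁₃ values is given below. The run values are placeholders, not the data of Table 1; SciPy supplies the Student t quantile.

    import statistics
    from scipy import stats

    runs = [1.0049, 1.0061, 1.0052, 1.0057, 1.0050, 1.0055]  # hypothetical values

    mean = statistics.mean(runs)
    sem = statistics.stdev(runs) / len(runs) ** 0.5       # standard error of the mean
    half_width = stats.t.ppf(0.975, df=len(runs) - 1) * sem  # 95% two-sided CI
    print(f"k12/k13 = {mean:.4f} +/- {half_width:.4f} (95% CI)")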
Quantum periods and prepotential in ${\cal N}=2$ SU(2) SQCD We study ${\cal N}=2$ SU(2) supersymmetric QCD with massive hypermultiplets deformed in the Nekrasov-Shatashvili limit of the Omega-background. The prepotential of the low-energy effective theory is determined by the WKB solution of the quantum Seiberg-Witten curve. We calculate the deformed Seiberg-Witten periods around the massless monopole point explicitly up to the fourth order in the deformation parameter ℏ. Introduction The Seiberg-Witten (SW) solution [1,2] of the prepotential of N = 2 supersymmetric gauge theory enables us to understand both weak and strong coupling physics of the theory, such as instanton effects, the duality of the BPS spectrum [1,2], and nonlocal superconformal fixed points [3,4]. In the weak coupling region, the Nekrasov partition function [5,6], where the gauge theory is defined in the Ω-background [7], provides an exact formula for the prepotential including the nonperturbative instanton effects. The Nekrasov partition function can be computed with the help of the localization technique. In the strong coupling region, however, we do not know a localization method to reproduce the prepotential around the massless monopole point. The Nekrasov function is related to the conformal block of two-dimensional conformal field theory [8,9] and also to the partition function of topological string theory [10]. The analysis of the conformal block with insertion of the surface operator [11,12,13] leads to the concept of the quantum Seiberg-Witten curve. The solution of the quantum curve gives the low-energy effective theory of the Ω-deformed theories, which are parametrized by two deformation parameters ǫ_1 and ǫ_2. In the Nekrasov-Shatashvili limit [14] of the Ω-background, where one of the deformation parameters, ǫ_2, is set to zero, the quantum curve becomes an ordinary differential equation. The quantum SW curve is obtained from the quantization procedure of the symplectic structure defined by the SW differential [15], where the parameter ǫ_1 plays the role of the Planck constant ℏ. In particular, the SW curve for SU(2) Yang-Mills theory becomes the Schrödinger equation with the sine-Gordon potential, and the higher order corrections to the deformed period integrals in the weak coupling region have been calculated by using the WKB analysis [16]. This was generalized to N = 2 SU(N) SQCD [17]. Note that the SW curve for N = 2* SU(2) gauge theory corresponds to the Lamé equation, and the deformed period integrals have also been calculated by using the WKB analysis [18,19]. One can derive the Bohr-Sommerfeld quantization conditions, which are nothing but the Baxter T-Q relations of the integrable system [17,20,21]. The deformed period integral agrees with that obtained from the Nekrasov partition function. It is interesting to study perturbative and non-perturbative quantum corrections in ℏ in the strong coupling region of the moduli space, which might change the strong coupling dynamics of the theory. In [22], the perturbative corrections around the massless monopole point in the N = 2 SU(2) super Yang-Mills theory have been studied. In [23], the 1-instanton correction in ℏ to the dual prepotential has been calculated. In [24,25,26,27], the non-perturbative aspects of the ℏ expansion in N = 2 theories have been studied.
The purpose of this work is to study systematically the perturbative corrections in ℏ to the prepotential at strong coupling, where the BPS monopole becomes massless, for N = 2 SU(2) SQCD with N_f = 1, 2, 3, 4 hypermultiplets. We investigate quantum corrections to the period integrals of the SW differential and the prepotential up to the fourth order in the deformation parameter ℏ. This paper is organized as follows: In Section 2, we review the quantization of the SW curve and the quantum periods for N = 2 SU(2) SQCD. In Section 3, we show in detail that the quantum corrections can be expressed by acting differential operators on the undeformed SW periods. In Section 4, we calculate the quantum periods in the weak coupling region for N = 2 SU(2) SQCD and confirm that they agree with those obtained from the Nekrasov partition function. In Section 5, we study the expansions of the periods around the massless monopole point in the moduli space. We consider how the effective coupling and the massless monopole point are deformed by ℏ. In Section 6, we add some comments and discussions. 2 Quantum SW curve for N = 2 SU(2) SQCD The Seiberg-Witten curve for N = 2 SU(2) gauge theory with N_f (= 0, ..., 4) hypermultiplets is given by eq. (2.1), with Λ_{N_f} being a QCD scale parameter for N_f ≤ 3 and Λ = √q for N_f = 4. Here q = e^{2πiτ_UV}, and τ_UV denotes the UV coupling constant [28,8]. K(p) and K_±(p) are defined in eqs. (2.2) and (2.3); in particular, K_±(p) are products of factors (p + m_j), where u is the Coulomb moduli parameter and m_1, ..., m_{N_f} are mass parameters. N_+ is a fixed integer satisfying 1 ≤ N_+ ≤ N_f. The curve (2.1) can be written in the standard form (2.4) [29]. Let α and β be a pair of canonical one-cycles on the curve. The SW periods (a, a_D) are defined by the integrals of the SW differential λ_SW = p dx over the cycles α and β (2.5, 2.6), where p(x) is a solution of (2.1). Then the prepotential F(a) is determined by a_D = ∂F/∂a (2.7). The SW differential defines a symplectic form dλ_SW = dp ∧ dx on the (p, x) space. The quantum SW curve is obtained by regarding the coordinate p as the differential operator −iℏ d/dx. We then have the differential equation (2.8), where ∂_x = ∂/∂x. Here we take the ordering prescription of the differential operators as in [17]. This differential equation is also obtained by observing the relation between the quantum integrable models and the SW theory in the Nekrasov-Shatashvili (NS) limit of the Ω-background [16]. The same differential equation is also obtained from the insertion of the degenerate primary field corresponding to the surface operator in the two-dimensional conformal field theory [11,12,13]. In this paper, we will choose N_+ such that the differential equation becomes a second order differential equation of the form (2.10). We then convert this equation into the Schrödinger type equation by introducing Ψ(x) as in (2.11). The quantum SW periods are defined by the WKB solution of equation (2.10), Ψ(x) = exp((i/ℏ)∫^x P(y) dy) (2.12), where P(y) = Σ_{n=0}^∞ ℏ^n p_n(y) (2.13) and p_0(y) = p(y). Substituting the expansion (2.13) into (2.10), we obtain the recursion relations for the p_n(x). Note that p_n(x) for odd n becomes a total derivative, and only the p_{2n}(x) contribute to the period integral. The first three p_{2n} are given by (2.14) up to total derivatives. Then the quantum period integral Π = ∮ P(x) dx = (a, a_D) along the cycles α and β can be expanded in ℏ as Π = Σ_{n=0}^∞ ℏ^{2n} Π^{(2n)} (2.15), where Π^{(2n)} := ∮ p_{2n}(x) dx (2.16). Now we study the equations satisfied by the quantum SW periods. It has been shown that the undeformed (or classical) SW periods Π^{(0)} obey a third order differential equation with respect to the moduli parameter u, called the Picard-Fuchs equation [30,31,32,33,34,35].
Note that $\partial_u p_0$ is the holomorphic differential on the curve. When we write the curve (2.4) in the form (2.18), where the weak coupling limit corresponds to $e_2 \to e_3$ and $e_1 \to e_4$, we can evaluate the periods $\partial_u \Pi^{(0)} = \oint \partial_u p_0\, dx$ (2.19) in terms of the hypergeometric function. Then, by using quadratic and cubic transformations [36,35], one finds that in the weak coupling region, where u is large, the classical periods $\partial_u a^{(0)}$ and $\partial_u a^{(0)}_D$ are given by (2.20) and (2.21), where
$z = -\frac{27\Delta}{4D^3}$
and the weak coupling region corresponds to z = 0. Here Δ and D for the curve (2.18) are defined in (2.22) and (2.23); D is built from the symmetric combinations $e_i^2 e_j e_k + e_i e_j^2 e_k + e_i e_j e_k^2$, and Δ is the discriminant of the curve. F(α, β; γ; z) and F*(α, β; γ; z) are the hypergeometric functions defined by (2.24). Changing the variable from z to u, the hypergeometric differential equation for $F(\frac{1}{12}, \frac{5}{12}; 1; z)$ leads to the Picard-Fuchs equation for $\frac{\partial \Pi^{(0)}}{\partial u}$. It takes the form (2.25), with coefficient functions $p_1$ and $p_2$ determined by α = 1/12, β = 5/12 and γ = 1. For the SW curve (2.1) with Nf ≤ 3, the Picard-Fuchs equations (2.25) agree with those in [33,34]. Note that for the massless case, the Picard-Fuchs equation turns out to be a second order differential equation for $\Pi^{(0)}$ [32].

The higher order correction $\Pi^{(k)}$ to the SW period $\Pi^{(0)}$ is determined by acting with a differential operator $\hat{O}_k$ on $\Pi^{(0)}$ [10,20,22,37]:
$\Pi^{(k)} = \hat{O}_k\, \Pi^{(0)}.$ (2.28)
There are various ways to represent the differential operator $\hat{O}_k$. For example, one can use the first and second order differential operators with respect to u to express $\Pi^{(k)}$ as
$\Pi^{(k)} = \left( X^1_k\, \partial_u + X^2_k\, \partial_u^2 \right) \Pi^{(0)}.$ (2.29)

Let us study the simplest example, the Nf = 0 theory. We have the quantum SW curve (2.10) with the sine-Gordon potential (2.30). The SW periods $\Pi^{(0)}$ satisfy the Picard-Fuchs equation (2.31) [30]. The discriminant Δ and D are given in (2.32). The second and fourth order quantum corrections are given by (2.33) and (2.34) [10,16,22]. With the help of the Picard-Fuchs equation (2.31), we find a simpler formula (2.35) for $\Pi^{(4)}$. In the weak coupling region, where $u \gg \Lambda_0^2$, substituting (2.32) into (2.20) and (2.21), we can obtain $a^{(0)}$ and $a^{(0)}_D$ (2.36) up to the fourth order in ℏ. It has been checked that the quantum curve reproduces the prepotential obtained from the NS limit of the Nekrasov partition function [16,22].

We can also consider the quantum SW periods in the strong coupling region. For example, at $u = \pm\Lambda_0^2$, where the monopole/dyon becomes massless, we can compute the SW periods by solving the Picard-Fuchs equation in terms of hypergeometric functions [31]. For the computation of the deformed SW periods, it is convenient to use (2.35) rather than (2.34), since the coefficients in (2.34) become singular at $u = \Lambda_0^2$. We then find the expansion of the SW periods around $u = \Lambda_0^2$ in the shifted variable ũ; in particular, $a_D(\tilde{u})$ has a series expansion whose explicit form is given in [22]. In the following sections, we will generalize these results and compute the quantum corrections to the SW periods in the strong coupling region for the Nf = 1, 2, 3, 4 cases.

3 Quantum periods for Nf ≥ 1

Let us study the quantum SW periods for the SU(2) theory with Nf ≥ 1 hypermultiplets. We will choose $N_+$ in (2.3) such that the differential equation (2.8) becomes a second order differential equation. Then we convert the quantum SW curve into the Schrödinger type equation (2.10). The quantum SW periods are given by the integrals (2.15) and (2.16). These periods can be represented as $\hat{O}_k \Pi^{(0)}$ with some differential operators $\hat{O}_k$. We will find the second and fourth order corrections to the SW periods. In the following, $\Delta_{N_f}$ stands for Δ and $D_{N_f}$ for D in (2.22) and (2.23) for the Nf theory.
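Since the building blocks of the period computation are the hypergeometric solution $F(\frac{1}{12}, \frac{5}{12}; 1; z)$ and its logarithmic companion F*, it is easy to check the quoted data numerically. The following mpmath sketch, using only the parameters α = 1/12, β = 5/12, γ = 1 quoted in the text (the theory-specific normalizations multiplying F are omitted), evaluates the series coefficients and verifies the hypergeometric differential equation:

```python
import mpmath as mp

al, be, ga = mp.mpf(1)/12, mp.mpf(5)/12, mp.mpf(1)

def series_coeff(n):
    # Coefficient of z^n in F(al, be; ga; z) with ga = 1:
    # (al)_n (be)_n / ((ga)_n n!) = (al)_n (be)_n / (n!)^2
    return mp.rf(al, n) * mp.rf(be, n) / mp.factorial(n)**2

# Check the series against mpmath's hypergeometric at a point in the unit disc:
z0 = mp.mpf('-0.2')
series_val = mp.nsum(lambda n: series_coeff(int(n)) * z0**int(n), [0, mp.inf])
print(series_val, mp.hyp2f1(al, be, ga, z0))   # the two values agree

# Verify the hypergeometric ODE z(1-z)F'' + [ga - (al+be+1)z]F' - al*be*F = 0:
F = lambda z: mp.hyp2f1(al, be, ga, z)
res = (z0 * (1 - z0) * mp.diff(F, z0, 2)
       + (ga - (al + be + 1) * z0) * mp.diff(F, z0)
       - al * be * F(z0))
print(res)   # ~ 0 to working precision
```

Since F(0) = 1, the series is trivial at z = 0, consistent with the statement that the weak coupling region corresponds to z = 0.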
Nf = 1

In the theory with Nf = 1 hypermultiplet, we can take $N_+ = 1$ in the SW curve (2.1) without loss of generality. The quantum curve is written as the Schrödinger type equation with a Tzitzéica-Bullough-Dodd type potential (3.1), where $Q_2(x) = 0$. The SW periods $\Pi^{(0)}$ satisfy the Picard-Fuchs equation (2.25), with $\Delta_1$ and $D_1$ given in (3.2). They are also found to satisfy a differential equation (3.3) with respect to the mass parameter m. We will calculate the corrections of the second and fourth orders in ℏ [37] to the period integrals using (2.15) and (2.16). These corrections, (3.5) and (3.6), are expressed in terms of a basis of derivatives of $\Pi^{(0)}$; the coefficients in (3.5) and those in (3.6) are given explicitly through (3.9). We will compare the quantum prepotential with the NS limit of the Nekrasov partition function in the weak coupling region in the next section.

The above representation of the period integrals is suitable for considering the decoupling limit to the pure SU(2) theory, which is defined by $m_1 \to \infty$ and $\Lambda_1 \to 0$ with $m_1 \Lambda_1^3 = \Lambda_0^4$ kept fixed. In the decoupling limit, the second and fourth order corrections (3.5) and (3.6) agree with (2.33) and (2.34). In Section 5, we will study the deformed period integrals in the strong coupling region, where the monopole/dyon becomes massless. In this case, the discriminant $\Delta_1$ of the curve has a first order zero, at which the coefficients in (3.5) and (3.6) become singular. Since the SW periods $\Pi^{(0)}$ satisfy the Picard-Fuchs equation (2.25) and the differential equation (3.3), the differential operator $\hat{O}_k$ in (2.28) for the higher order corrections is defined modulo such differential operators. We note that the coefficients of the differential operator for $\Pi^{(2)}$ can be rewritten accordingly. Using the Picard-Fuchs equation (2.25) and the differential equation (3.3), we find that the second order correction to the SW periods can be expressed as in (3.11). In a similar way, we find that the fourth order correction to the SW periods can be expressed as in (3.12). Since all the coefficients are now regular when $\Delta_1 = 0$, we can easily calculate the quantum SW periods at the various strong coupling points in the Coulomb branch.

Nf = 2

In the theory with Nf = 2 hypermultiplets, we can choose $N_+ = 1$ or $N_+ = 2$; for the $N_+ = 2$ case, Q(x) includes an ℏ² term. Although the two quantum curves look quite different, they are shown to give the same period integrals. One reason is that the SW periods in both cases satisfy the same Picard-Fuchs equation, with the discriminant $\Delta_2$ and $D_2$ given in (3.15), together with the same differential equations with respect to the mass parameters. Since the SW periods are uniquely determined by the Picard-Fuchs equation together with their perturbative behavior around the singularities, the SW periods do not depend on the choice of $N_+$. We can also check by explicit calculation that the second and fourth order corrections are given by expressions whose coefficients $c^{(2)}_2, \ldots$ are given in (3.18). In the decoupling limit, where $m_2 \to \infty$ and $\Lambda_2 \to 0$ with $m_2 \Lambda_2^2 = \Lambda_1^3$ kept fixed, we recover the SW periods of the Nf = 1 theory. Furthermore, it can be checked that the second and fourth order corrections to the SW periods become those of the Nf = 1 theory.

Nf = 3

In the case of Nf = 3, we can choose $N_+ = 1$ or 2 in (2.8); otherwise, we obtain a third order differential equation. We will take $N_+ = 2$ without loss of generality. The quantum curve is the Schrödinger type equation (2.10) with an appropriate potential. The SW periods satisfy the Picard-Fuchs equation and the differential equations with respect to the mass parameters $m_i$ (i = 1, 2, 3) and the moduli parameter u. Since these equations are rather complicated, we will write them down only for the theory with equal masses, m := m1 = m2 = m3.
In this case the discriminant $\Delta_3$ and $D_3$ take the form given in (3.25). We can also confirm that the SW periods satisfy a differential equation whose coefficients $b_3$ and $c_3$ appear in (3.26). We can also calculate the Picard-Fuchs equation for the general mass case based on $\Delta_3$ and $D_3$. In this case we can check that the quantum corrections to the SW periods $\Pi^{(0)}$ are expressed as in (3.27) and (3.28). The coefficients are not singular when $\Delta_3 = 0$. With the help of the Picard-Fuchs equation and the differential equation with respect to the mass parameters, we can rewrite the quantum SW periods (3.27) and (3.28) in terms of the basis $\partial_u \Pi^{(0)}$ and $\partial_u^2 \Pi^{(0)}$. For the equal mass case, we find the corresponding expressions explicitly. In this representation, however, the coefficients become singular at the point where $\Delta_3 = 0$; it is nevertheless useful for discussing the decoupling limit to the Nf = 0 theory. In the decoupling limit, $m \to \infty$ and $\Lambda_3 \to 0$ with $m^3 \Lambda_3 = \Lambda_0^4$ kept fixed, the SW periods of the Nf = 3 theory agree with those of the Nf = 0 theory. Moreover, we can show that the second and fourth order corrections to the quantum SW periods become those of the Nf = 0 theory in this limit.

For the Nf = 4 theory, we can also consider the massless limit, where the Picard-Fuchs equation takes a simple form. Note that the coefficients $X^1_k$ and $X^2_k$ in (3.32) and (3.34) become singular in the massless limit $m \to 0$. In the massless case, it is found that (3.32) and (3.34) are replaced by the formulas (3.37) and (3.38), which include the derivative with respect to q in addition to the u-derivatives. In the following sections, we will compute the quantum SW periods in both the weak and strong coupling regions and compute the deformed (dual) prepotentials.

4 Deformed periods in the weak coupling region

In this section, for completeness, we discuss the expansion of the quantum SW periods in the weak coupling region and compute the deformed prepotential for the Nf theories [37,38]. We then compare the prepotential with the NS limit of the Nekrasov partition function [17]. Note that the deformed prepotentials for Nf = 1, 2, 4 are obtained from the classical limit of the conformal blocks of two-dimensional conformal field theories [39,40,41]. The SW periods (2.6) around u = ∞ have been given by (2.20) and (2.21) [35]. The quantum SW periods can be obtained by acting with the differential operators on the SW periods $a^{(0)}$ and $a^{(0)}_D$.

Nf ≤ 3

In the case of Nf = 1, the discriminant $\Delta_1$ and $D_1$ are given by (3.2). Expanding $a^{(0)}(u)$ and $a^{(0)}_D(u)$ around u = ∞ and substituting them into (3.11) and (3.12), we obtain the expansions around u = ∞; they are found to be (4.1) and (4.2). Solving for u in terms of a in (4.1) and substituting it into $a_D$, $a_D$ becomes a function of a. Then, integrating it over a, we obtain the deformed prepotential (4.3), where the first few coefficients of F(a, ℏ) are given in (4.4), with the coefficient functions defined as in [33]. In a similar way, we can calculate the deformed prepotentials for the Nf = 2 and 3 theories, which are expanded as in (4.6), where some coefficients $F^{(2k,n)}_{N_f}$ (k = 0, 1, 2) are given in appendix A. The perturbative parts are given by (4.7) and (4.8), with (4.9). These deformed prepotentials are shown to be consistent with the decoupling limits.

We now compare the prepotentials for the Nf = 1, 2, 3 theories with the NS limit of the Nekrasov partition functions. By rescaling the parameters ℏ, $m_i$ (i = 1, 2, 3) and $\Lambda_{N_f}$ such that $2\pi i F(a, \hbar) \to F(a, \epsilon_1)$, and then shifting the mass parameters as $m_i \to m_i + \epsilon/2$ for a fundamental matter or $m_i \to \epsilon/2 - m_i$ for an anti-fundamental matter, we find that the prepotential agrees with that obtained from the Nekrasov partition function [5].
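The step "solve u in terms of a, substitute into $a_D$, and integrate" can be mechanized with computer algebra. The SymPy sketch below is purely illustrative: the input series uses a placeholder coefficient c rather than the actual coefficients of (4.1), which are not reproduced in this text, but the order-by-order inversion is exactly the manipulation described above.

```python
import sympy as sp

a, u, L, c = sp.symbols('a u Lambda c', positive=True)

# Placeholder weak-coupling series for a(u); c stands in for the coefficient in (4.1).
a_of_u = sp.sqrt(2 * u) * (1 + c * L**4 / u**2)

# Invert order by order in Lambda with the ansatz u(a) = (a^2/2)(1 + x1 L^4 + x2 L^8):
x1, x2 = sp.symbols('x1 x2')
ansatz = (a**2 / 2) * (1 + x1 * L**4 + x2 * L**8)
residual = sp.expand(sp.series(a_of_u.subs(u, ansatz) - a, L, 0, 12).removeO())
sol = sp.solve([residual.coeff(L, 4), residual.coeff(L, 8)], [x1, x2], dict=True)[0]
u_of_a = sp.simplify(ansatz.subs(sol))
print(u_of_a)
# Substituting u_of_a into the series for a_D(u) and integrating term by term
# over a then yields the prepotential F(a), since a_D = dF/da.
```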
Nf = 4

In the case of Nf = 4, after an appropriate q-dependent rescaling of y and x in the SW curve, we can apply the formulas (2.20) and (2.21). Expanding around q = 0 and integrating over u, we obtain the SW periods $a^{(0)}$ and $a^{(0)}_D$, where the discriminant $\Delta_4$ and $D_4$ are given in (3.31). The deformed prepotential is (4.10), where the perturbative part is given by (4.11), with (4.12). The first several coefficients $F^{(2k,n)}_4$ for k = 0, 1, 2 are given in appendix A.3. By rescaling the parameters ℏ, m and q, we find that (4.10) agrees with the prepotential obtained from the NS limit of the Nekrasov partition function of the theory with equal masses, where the mass parameters must be shifted as $m_i \to m_i + \epsilon/2$ for a fundamental matter or $m_i \to \epsilon/2 - m_i$ for an anti-fundamental matter (i = 1, ..., 4). For the massless case m = 0, the Picard-Fuchs equation (3.36) has a solution of a simple form, expressed through a function f(q). Then, using (3.37) and (3.38), the second and fourth order corrections to the SW periods can be written in terms of the derivative $\frac{\partial f(q)}{\partial q}$.

Deformed effective coupling constant

Taking the u-derivative of the quantum SW period $\Pi = \sum_{k=0}^{\infty} \hbar^{2k} \Pi^{(2k)}$, we obtain the ℏ-expansions of $\partial_u a$ and $\partial_u a_D$. The deformed effective coupling is defined by $\tau = \frac{\partial a_D}{\partial a}$. The leading correction to the classical coupling constant is then determined by a dimensionless coefficient $Y^1_2$. We will evaluate the coefficient $Y^1_2$ for some simple cases, where all hypermultiplets have the same mass m. For Nf = 0, it follows from the coefficients $X^1_2$ and $X^2_2$ in (2.33) that $Y^1_2$ takes the form (4.25). In a similar way we can compute the coefficient $Y^1_2$ for Nf ≥ 1. The results are the following: for Nf = 1 and Nf = 2, we obtain the expressions through (4.28); for Nf = 3, we obtain the expression in which $b_3$ and $c_3$ are given by (3.26); for Nf = 4, we find (4.30). We have confirmed that the above formulas are consistent with the decoupling limit and that the deformed periods agree with those obtained from the NS limit of the Nekrasov partition function explicitly up to the fourth order in ℏ.

5 Deformed periods around the massless monopole point

In this section, we consider the quantum SW periods in the strong coupling region of the theories with Nf = 1, 2, 3 hypermultiplets, where a BPS monopole/dyon becomes massless. In particular, we will consider the point in the u-plane at which the deformed BPS monopole becomes massless, $a_D(u) = 0$. The dual SW period $a^{(0)}_D(\tilde{u})$ is expanded around this point, with the constant of integration fixed by $a^{(0)}_D(0) = 0$, while $a^{(0)}(\tilde{u})$ is given up to a constant which is independent of ũ. The integer l is defined as the smallest integer which gives a nonzero $r_n$, i.e. $r_n = 0$ (n < l) and $r_l \neq 0$. $B_n$ and $A_n$ are expressed in terms of $r_n$ and $s_n$; the first three terms of $B_n$ and $A_n$ are given in terms of
$f(n) = \frac{(1/12)_n\, (5/12)_n}{n!}, \qquad g(n) = \frac{(1/12)_n\, (5/12)_n}{(n!)^2} \sum_{r=0}^{n-1} \left( \frac{1}{1/12 + r} + \frac{1}{5/12 + r} - \frac{2}{1 + r} \right),$ (5.8)
where $(x)_n$ denotes the Pochhammer symbol. The higher order corrections in ũ can be calculated in a similar way. Once the SW periods around the massless monopole point are obtained, the quantum SW periods can be calculated by applying the differential operators, as in the weak coupling region. Thus what we have to do is to obtain the explicit value of $u_0$, which is one of the zeros of Δ, and the series expansion of z and $(-D)^{1/4}$ around $u_0$. However, for general mass parameters, the expression for $u_0$ is rather complicated. Therefore we give explicit expressions for the quantum SW periods only in two simpler cases: massless hypermultiplets, and massive hypermultiplets with the same mass. Before going to these examples, we will discuss an interesting phenomenon caused by the quantum corrections.
Although the undeformed dual period vanishes at the classical monopole point, the coefficients $B^{(2)}_0$ and $B^{(4)}_0$ are observed to be non-zero by explicit calculation. We then find that the massless monopole point $U_0$ of the deformed theory is expressed as
$U_0 = u_0 + \hbar^2 u_1 + \hbar^4 u_2,$ (5.10)
where $u_1$ and $u_2$ are determined by requiring that the deformed dual period vanish order by order in ℏ. We will compute these corrections explicitly in the following examples.

Massless hypermultiplets

We discuss the case where the mass of the hypermultiplets is zero. This case gives a simple and interesting example, since the moduli space admits a discrete symmetry. We will consider the massless monopole point in the moduli space. The solution of the Picard-Fuchs equation around the massless monopole point $u_0$ has been studied in [32]. For the Nf = 1 theory, the massless monopole point is $u_0 = -3\Lambda_1^2 / 2^{8/3}$. Around $u_0$, z and $(-D_1)^{-1/4}$ can be expanded in powers of ũ, from which we can read off the coefficients $r_n$ and $s_n$ in the expansions (5.3). Substituting these coefficients into (5.4) and (5.5), we obtain the SW periods $(a^{(0)}(u), a^{(0)}_D(u))$. Then, using the relations (3.11) and (3.12), we obtain the expansion of the quantum SW periods around ũ = 0 (5.16). Inverting the series of $a_D$ in terms of ũ, we obtain ũ as a function of $a_D$. Substituting ũ into a and integrating a with respect to $a_D$, we obtain the dual prepotential (5.17), whose first several coefficients are given through (5.25). We then obtain the deformed dual prepotentials for the Nf = 2 and 3 theories, whose coefficients are listed in table 3 and table 4. The dual prepotentials include the classical term and the one-loop term, as do (4.4), (4.7) and (4.8) in the weak coupling region. These terms also appear in the pure SU(2) theory [22].

Table 3: The coefficients of the dual prepotential for the Nf = 2 theory, where $c^{(2)} = -i\, 2^{-5/2}$ [32]. (5.28)

In the next subsection, we will discuss the expansion around the massless monopole point $u_0$ for the theory with massive hypermultiplets of equal mass.

Massive hypermultiplets with the same mass

We consider the case where all the hypermultiplets have the same mass, $m := m_1 = \cdots = m_{N_f}$. The classical massless monopole point $u_0$ corresponds to a solution of the discriminant equation $\Delta_{N_f} = 0$ and can be located explicitly in the u-plane. In the decoupling limit $m \to \infty$ with $m^{N_f} \Lambda_{N_f}^{4-N_f} = \Lambda_0^4$ kept fixed, these points become the massless monopole point $\Lambda_0^2$ of the Nf = 0 theory. If we consider the massless limit instead, these points become the massless monopole points of the massless Nf theory. From the expansions (5.36) of the periods around these points, we find that the massless monopole point $U_0$ is given by (5.10), with explicit $u_1$ and $u_2$. For Nf = 2, the massless monopole point $U_0$ is likewise found to be of the form (5.10). In the case $|m| \ll \Lambda_2$, we can expand $u_1$ and $u_2$ in (5.10); note that the first terms in these expansions correspond to those in the massless limit. We can perform a similar calculation of $U_0$ up to the fourth order in ℏ for general m. We find that the massless monopole point is shifted by the ℏ-corrections; the shift is illustrated in Fig. 1.

6 Conclusions and Discussion

In this paper, we have studied the low-energy effective theory of N = 2 supersymmetric SU(2) gauge theory with Nf hypermultiplets in the NS limit of the Ω-background. The deformation of the periods of the SW differential is described by the quantum spectral curve, which is an ordinary differential equation and can be solved by the WKB method. The quantum spectral curve and the Picard-Fuchs equations for the SW periods provide an efficient tool for obtaining the series expansion with respect to the Coulomb moduli parameter and the deformation parameter ℏ.
We have found simple formulas representing the second and fourth order corrections to the SW periods, which are obtained by applying differential operators to the undeformed SW periods. In the weak coupling region we solved the differential equations up to the fourth order in ℏ. We have explicitly checked that the quantum SW periods give the same prepotential as that obtained from the NS limit of the Nekrasov partition function. We then studied the expansion of the quantum corrections around the massless monopole point. By solving the Picard-Fuchs equations for the SW periods, we have obtained the quantum corrections to the dual SW period $a_D$, and we have found that the massless monopole points in the u-plane are shifted by the quantum corrections. It is interesting to explore the higher order corrections and how the structure of the moduli space is deformed by the quantum corrections. It is also interesting to study the expansion around the Argyres-Douglas point [3,4,43,44] in the u-plane, where mutually non-local BPS states become massless. A generalization to theories with general gauge groups and various hypermultiplets is also interesting.

A.1 Nf = 2

For the Nf = 2 theory, the first four coefficients of the classical part of the prepotential in (4.6) are as follows.

A.2 Nf = 3

For Nf = 3, the coefficients of the prepotential in the expansion (4.6) are given below.

A.3 Nf = 4

For the Nf = 4 theory, the coefficients of the prepotential (4.10) are given up to the fourth order in ℏ.
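As a final illustration of the series manipulations used in Section 5, the inversion of $a_D(\tilde{u})$ performed there is ordinary reversion of a power series. In the SymPy sketch below, the coefficients c1, c2, c3 are hypothetical placeholders standing in for the expansion data of (5.16); the output reproduces the standard Lagrange-reversion formulas.

```python
import sympy as sp

aD, t = sp.symbols('a_D t')           # t plays the role of u-tilde
c1, c2, c3 = sp.symbols('c1 c2 c3')   # placeholder coefficients of a_D(t)

aD_of_t = c1 * t + c2 * t**2 + c3 * t**3

# Revert order by order with the ansatz t = b1*aD + b2*aD^2 + b3*aD^3:
b1, b2, b3 = sp.symbols('b1 b2 b3')
ansatz = b1 * aD + b2 * aD**2 + b3 * aD**3
residual = sp.expand(aD_of_t.subs(t, ansatz)) - aD
eqs = [residual.coeff(aD, k) for k in (1, 2, 3)]
sol = sp.solve(eqs, [b1, b2, b3], dict=True)[0]
print(sol)   # b1 = 1/c1, b2 = -c2/c1**3, b3 = (2*c2**2 - c1*c3)/c1**5
```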
Cultural Considerations for the Adaptation of a Diabetes Self-Management Education Program in Cotonou, Benin: Lessons Learned from a Qualitative Study

Background: Type 2 diabetes (T2D) poses a disproportionate burden on Benin, West Africa. However, no diabetes intervention has yet been developed for Benin's contexts. This study aimed to explore specific cultural beliefs, attitudes, behaviors, and environmental factors to help adapt a diabetes self-management program to patients with T2D from Cotonou, in southern Benin. Methods: Qualitative data were collected through focus group discussions (FGDs) involving 32 patients with T2D, 16 academic partners, and 12 community partners. The FGDs were audio-recorded, transcribed verbatim from French to English, and then analyzed thematically with MAXQDA 2020. Results: Healthy food was challenging to obtain due to costs, seasonality, and distance from markets. Other issues discussed were fruits and vegetables as commodities for the poor, perceptions and stigmas surrounding the disease, and the financial burden of medical equipment and treatment. Information about local food selections and recipes, as well as social support, particularly for physical activity, were identified among other needs. When adapting the curriculum, gender dynamics and spirituality were suggested as important considerations. Conclusions: The study demonstrates the need for culturally sensitive interventions and a motivation-based approach to health (spiritual and emotional support). It also lays the groundwork for addressing T2D contextually in Benin and similar sub-Saharan African countries.

Introduction

Type 2 diabetes (T2D) is now a global epidemic, with 15.9 million adults affected in sub-Saharan Africa (SSA), causing an annual economic cost of USD 3.3 billion. Furthermore, it is estimated that the burden of T2D on the African continent will increase by approximately 156% by 2045 [1]. Benin, a sub-Saharan country, is no exception, as the prevalence of diabetes doubled between 2008 (4.6%) and 2015 (8.4%). In some regions, the diabetes prevalence reached 21.6%, with 15.1% in urban areas [2]. Moreover, the burden of T2D is projected to rise in Benin due to accelerated urbanization and new lifestyles [3]. Indeed, diabetes ranks among the 10 most prevalent health conditions that cause disability, with a 55.8% rise in diabetes-related disabilities between 2007 and 2017 [4]. Unfortunately, most diabetes cases in Benin are managed in secondary health facilities, where diabetes educators are scarce. Therefore, medical officers often assume these roles, while nurses also act as diabetes educators [5]. Consequently, T2D imposes a tremendous burden on individuals, their families, and the health system.

Study Setting

Cotonou lies between the Atlantic Ocean and Lake Nokoué. In Cotonou, French is the official language, and the population is estimated at 1.2 million [25]. Although the population grew by 12% from 2015 to 2018, most of the increase was among adults 45 years of age and older, which is significant since this age group is at a higher risk of T2D [25,26]. While the poverty rate for Benin was 49.5% (below USD 1.90 a day) in 2015, 98% of the population in Cotonou ranked in the last two quintiles of economic well-being [27]. Literacy rates for people between 15 and 49 years are 67% (women) and 88.3% (men) [28]. Besides being Benin's largest city, Cotonou is home to most of the government's offices and diplomatic missions, as well as small private industries producing palm oil, beverages, and seafood processed at small scales.
Cotonou has been changing rapidly, and lifestyles have shifted accordingly, with diets higher in fat and sugar, reduced physical activity, and increased smoking, alcohol consumption, and stress [29]. As a result, between 2008 and 2015, the prevalence of diabetes rose from 4.4% to 19% in Cotonou city [2].

Intervention Components

The MSD intervention consists of two-hour participatory workshop-style sessions delivered within a support group structure over 13 consecutive weeks and is described in detail in a previous publication [20]. During the weekly sessions, participants received educational information and participated in empowerment-building discussions and interactive workshop activities to promote long-term behavioral change related to disease complications, diet, and increased physical activity. Sessions maintained a basic structure throughout the 13 intervention weeks: blood pressure and glucose monitoring; readings, discussions, and games related to each week's topic; execution of a custom-designed physical activity routine; and a follow-up exercise to meet a nutrition or physical activity goal. One or more health professionals (e.g., nurses, community health workers, doctors, or clinic staff, including interns) delivered each session during a regularly scheduled, face-to-face group meeting. The MSD program consists of seven components: (1) healthy eating (eating less fat, using the Healthy Plate to create a healthier diet); (2) being active (adapting physical activity to 30 min a day); (3) monitoring blood sugar levels; (4) taking medication appropriately on a regular basis; (5) problem solving to maintain blood sugar levels in the targeted range; (6) reducing risks for diabetes-related complications (heart attack, stroke, kidney and nerve damage, vision loss); and (7) healthy coping with stress. These components align with the Diabetes Self-Management Education and Support (DSMES) standards, as outlined in the 2016 American Diabetes Association (ADA) Standards of Medical Care in Diabetes [30,31].

Study Design

This qualitative study was conducted as part of a formative research effort to assess the contextual factors that might influence the delivery of the MSD intervention in Benin and to inform necessary changes to the implementation strategy. In the present study, we conducted focus group discussions (FGDs) with T2D patients (>18 years of age), as well as community and academic partners, to better understand the contextual factors influencing the implementation of the seven MSD components. The FGD questions were derived from the literature and informed by the MSD curriculum (Supplementary Material 1, Questionnaire). In addition, the FGDs were conducted using a guide containing open-ended questions, allowing the facilitators to adapt their questions to the discussion flow. Finally, the same questions were posed to all FGDs to allow for qualitative comparison among groups.

Participant Sampling

In order to participate in the study, participants had to live in Cotonou for at least six months each year. Their recruitment followed a previous quantitative survey, which collected data on the healthcare system's capacity, socio-demographic and clinical characteristics, diabetes quality of life, and self-care behaviors among 300 T2D patients (age > 18 years) in four relevant secondary hospitals [32].
A purposive sampling strategy was used to recruit patients with T2D from the 300 patients who participated in the survey, with the aim of achieving demographic diversity in terms of sex, age, marital status, and education. The medical profiles of the 300 patients with T2D were assessed against the inclusion criteria, i.e., being diagnosed with T2D for more than a year, being 18 years of age or older, and being willing to give informed consent to participate in the study. The exclusion criteria were extreme disease conditions that would prevent participation in the study. After confirming that they met the inclusion criteria, 40 participants were randomly selected and invited to participate in the study. For the health system's capacity assessment, key informants were identified in collaboration with the Ministry of Health. During the process, community and academic partners were purposively selected for participation in this study to include partners of different cadres and experiences. Study objectives were explained to the study participants, and those who agreed to participate gave their informed consent. Saturation of information was reached after three patient FGDs (9-11 participants per group), two community partner FGDs (6 participants per group), and two academic partner FGDs (8 participants per group). In total, 12 community partners (10 community health workers (CHWs) and 2 traditional healers) and 16 academic partners (4 endocrinologists, 4 general physicians, 4 pharmacists, and 4 nutritionists) were identified and invited to participate. Of the 40 T2D patients formally invited by phone, 32 (80%) accepted the invitation. The FGDs were recorded with the consent of the interviewees and lasted roughly 2-2.5 h.

Training, Piloting, and Data Collectors

Two experienced facilitators and the lead author conducted the focus groups. The facilitators were secondary school teachers with bachelor's degrees in education. Their training involved a daylong instructional session and covered modules including qualitative data collection methods and research ethics. The instruments used to collect data were tested in two ways. First, two CHWs and four Beninese nutrition experts reviewed the guide to ensure that the FGD questions were relevant, context specific, and culturally appropriate. Additionally, three pilot FGDs were conducted with four T2D patients, four CHWs, and four physicians to test the flow of questions among the target populations. Feedback was incorporated into the final version of the guides used for data collection. For example, feedback was given regarding the cultural relevance and translation accuracy of the questionnaire: most respondents interpreted the questions differently than intended in the original questionnaire, so the questions were reworded. Our pretest trials also revealed the length of time needed to conduct the FGDs, which may have contributed to fatigue, disengagement, or distress. Academic partners and patients suggested keeping the FGDs limited in time so that they could participate after work. As a result, attention was given to the sequence in which the questionnaire was administered, although the approach generally remained routine, moving from informed consent to the demographic survey, followed by the main FGD scripts, which were simplified.
Data Collection Procedures

Every FGD began by introducing the MSD curriculum as a program designed to help individuals with T2D maintain their blood sugar levels within a healthy range [20]. The FGDs were conducted at central locations convenient for the participants: a local primary school and a community-owned multipurpose building. In order to promote a richer and more open dialogue, the FGDs with patients were stratified by education level: one FGD involved primary-level patients and two involved secondary-level and higher education patients. FGDs were not stratified based on age, since the majority of patients were over 40 years of age. The community and academic partners were not stratified by gender, as it was deemed culturally acceptable for men and women of all ages to engage in the discussion. Facilitators reassured respondents that identifiable responses would not be disclosed. Additionally, they were trained in inclusive facilitation techniques to ensure all respondents' active participation.

Data Analysis

FGDs were audio-recorded and transcribed verbatim from French into English. A bilingual French and English interpreter reviewed the transcripts for accuracy. Data analysis was initiated in the field, with debriefing meetings held following each FGD. With the help of the research assistants, the lead author reviewed the collected data, identified new areas to explore further, and discussed topics that were saturated. Detailed field notes were taken on each FGD. Formal analysis was conducted at the end of data collection using content analysis [33] and was driven by deductive and inductive processes using MAXQDA 2020 (VERBI Software, Berlin, Germany, 2019) [34]. In the first step, a matrix of codes based on the semi-structured FGD guides was developed, covering the seven components of MSD. Next, two independent researchers reviewed each transcript individually to check whether the codes were appropriate and to identify recurring themes that were not part of the initial matrix. Finally, the researchers reviewed the codes produced by the process, agreed on the final codes, and grouped and categorized the codes accordingly. The researchers coded the first 50 lines of the scripts for all the FGDs using the coding system, assessing its relevance and its consistency between researchers, and determining whether the coding system required any modification. Disparities identified by the researchers were resolved through consensus, and more codes were added when deemed necessary. By the end of the process, all the scripts were coded using the finalized coding system. The codes were summarized by intervention component and participant group. The findings were then compared between the three respondent groups [35]. The results of the study are reported according to the standards for qualitative research reporting (Supplementary Material 2) [36]. Demographic information was first entered in Microsoft Excel and exported to Stata, a quantitative analysis software (StataCorp, version 14), for tabulation.

Results

In total, seven FGDs with 60 participants were held (Table 1). Participants included T2D patients aged 44 and over (n = 3 FGs; 32 participants), community partners over 35 years old (n = 2 FGs; 12 participants), and academic partners (n = 2 FGs; 16 participants). The majority of participants in all three groups were married. The academic groups, however, were dominated by men, while women dominated the community partners.
Finally, most academic partners worked for the government, whereas community and patient partners were mostly non-government employees. Several sociocultural and environmental factors that may impact MSD intervention delivery were identified. The derived themes informed the cultural adaptation of each component of the MSD, as illustrated in Table 2.

Healthy Eating

Participants discussed multiple challenges to healthy eating, including access to healthy food, cultural norms and perceptions, and gender roles. All participants emphasized the relevance of information on locally available foods.

Access to healthy food. Participants mentioned cost, seasonality, and distance to healthy food markets as barriers to access. These factors tend to be intertwined, as people need to travel farther to buy more affordable and healthy food.

Cultural perceptions and norms. Along with access to food, another recurring theme among the participants was cultural perceptions and norms, especially concerning fruits and vegetables. Notwithstanding their nutritional benefits, vegetables and fruits have been considered commodities of the poor.

"People think that eating vegetables is just being poor." (Community partner)

"Even though it is fruit season, people do not take it. Plenty go to the trash. For example, when you go to the north of the country, no one looks at mangoes." (Community partner)

"Well, because fruits and vegetables are not part of our culture. They are not food that can fill us up." (Community partner)

Gender roles. Specifically, in the patient FGDs, the process and act of cooking were associated with gender, suggesting the importance of engaging women in the curriculum, especially for the healthy eating component.

"Usually, our wives cook. My wife prepares for the house, goes to the market, and decides what to eat in the household." (Patient)

Lack of information on local foods and recipes. Participants across the groups echoed the lack of culturally relevant information based on locally available food options.

"Recipes based on our local foods will be helpful." (Academic partner)

"We need to focus on local diversity. Our local diversity is rich. You have to inform people. Not everything is expensive. People do not have the information of what is available." (Academic partner)

Physical Activity and Exercise

The responses from the participants indicated that group support and neighborhood characteristics might facilitate engagement and participation. The respondents also voiced preferences for easy and enjoyable activities to mitigate some of the patients' challenges, and they noted the lack of guides for physical activity.

Group support and neighborhood characteristics as potential facilitators of engagement and participation. Across all groups, many respondents commented that motivation and sustained participation in physical activities would depend mainly on within-group support. Specifically, the patient FGDs noted that neighborhood characteristics might determine participant engagement. Finally, many participants echoed that forming a group that functions as a sports club would improve engagement and participation in exercises and physical activities.

"If we are all in the same geographic area, it will be easier. It is said that it is unnecessary to join a club to do sport. Why don't we form this club?" (Patient)

"The support among us is better than a family who does not have diabetes and does not know about the disease."
(Academic partner)

Preferences for easy and enjoyable activities. The majority of participants emphasized that they would prefer physical activities that are easy to follow, owing to factors such as age, pre-existing conditions, and fatigue from work, and that are enjoyable enough to keep them motivated.

"Easy activities because most diabetic patients are old or overweight." (Community partner)

"Not complicated activities because they complain that they are always tired." (Academic partner)

"Pick activities they will enjoy. Most adults need exercise to be fun, or they lose their motivation to do it over time." (Community partner)

Lack of guides for physical activities. During the academic partner FGDs, participants noted that manual guides for physical exercise would provide examples of exercises for those who may not have adequate knowledge of which activities they could engage in.

"If they can have some examples of physical activities, it will help a lot, specifically for those who do not have any activity." (Academic partner)

Regular Blood Sugar Monitoring

Monitoring of blood sugar levels is often hindered by the limited availability of personal glucose monitoring test kits on the market, the financial burden, and attitudes toward self-directed glucose monitoring.

Availability of medical equipment on the market. Participants, especially from the patient group, voiced concern about the availability of the medical equipment required to monitor blood sugar levels, such as strips and glucose meters.

"I have five blood sugar meters. When I take one, there are no strips on the market, and I have to buy another." (Patient)

"There are different brands. A pharmacy sells glucometers, and they do not even have corresponding test strips." (Academic partner)

Financial burden. One of the barriers to monitoring blood sugar levels that recurred among respondents was financial concern. However, participants noted that support for medical supplies could motivate participation in future programs and events.

"I do not have a monitor, so I go to the hospital for my blood sugar when I have the money." (Patient)

"I will suggest not giving money directly but the devices, the glucometers that everyone needs to have at a lower cost." (Patient)

"I think that incentives such as free materials to use glucometers, blood tests, etc., can also motivate them." (Academic partner)

Perspectives toward self-directed glucose monitoring. Participants also noted that, in general, most people are not culturally accustomed to self-directed glucose monitoring, and that this might be a potential barrier to regular blood sugar monitoring.

"They are afraid of sting/puncture, and it hurts." (Community partner)

"Glucometers, they do not use. They are afraid of blood." (Community partner)

Adherence to Treatment Guidelines

The focus groups' responses indicated that the financial burden is one of the most daunting challenges impeding medication adherence. In addition, some participants expressed a lack of information on medications, while academic partners confirmed that they often do not have time during clinical consultations to explain and allay concerns about medications and other treatments for diabetic patients.

Financial burden. Participants discussed the financial burden and the significance of financial support for treatments and medical supplies to manage diabetes.

"Some complaints about the burden of the cost of drugs." (Community partner)

"...
there are drugs for treatment. For example, Diamicron costs 10,000 CFA and, depending on the prescription, you can spend more than that a month. It will be great to have help with drugs." (Patient)

"Others are also afraid of going to the hospital because of the money. Therefore, they go to the traditional therapist." (Community partner)

Insufficient information during doctors' consultations. Doctors often find themselves without sufficient time to provide a detailed consultation, potentially leading to a lack of knowledge and awareness of adequate medication adherence.

"Although doctors advise when it comes to medications, they do not have time to address some misconceptions about medications or explain their side effects." (Academic partner)

"I am told, if they inject insulin, it is not good for your body. Therefore, the doctor told me to never go on insulin and to continue with the drugs." (Patient)

Problem Solving to Maintain Blood Sugar Levels in the Targeted Range

Patients with personal monitoring kits tend to manage their blood sugar levels, quickly adjusting their diets accordingly. However, insufficient information on recommended diets and the glycemic index of foods, poor portion control, and difficulty making healthy choices when eating out seemed to be impediments, particularly for those without the kits.

Maintaining optimal blood sugar levels enabled by access to monitoring kits. Patients, especially those with monitoring kits, were more likely to monitor their blood sugar levels and adjust their diets regularly. They were able to set goals for themselves to maintain sugar levels in the targeted range.

Reducing Risks for Diabetes-Related Complications

Although respondents across the groups could describe their perceptions and experiences concerning the other themes, such as foods, physical activities, and monitoring blood sugar levels, little emerged concerning this component. Part of the reason may be a lack of awareness and knowledge, as hinted at in the response below.

"Diabetes affects many people in Cotonou. They die without knowing the reason. Most people rarely go to the hospital and many are sick all the time. Most of them have foot amputations. Also, people do not know what to do with the disease." (Community partner)

Coping with Stress

Negative perceptions of diabetes, which may be a potential barrier to healthy coping with stress, were mentioned universally across the groups. The coping mechanisms that emerged included denial, religion, and focusing on what is controllable.

Negative perceptions of the disease. Of particular interest is the perception of the disease, especially among the patient groups. Most patients perceived diabetes as a terminal disease and viewed its complications fatalistically. Diabetes was often associated with negative words such as the end of life and death, as stated below. Participants across the three focus groups mentioned that people who have diabetes might suffer additional social stigma. Participants converged on the view that social stigma makes patients reluctant to have open discussions about diabetes, implying a greater perceived burden of the disease in this context and potentially leading to increased stress.

"When they are told about diabetes, some parents think that we are already at the end of life and therefore that spending on drugs may be useless." (Patient)

"Even before the illness, I am told that it is the disease of the rich.
These are the people who eat rich dishes, who drink champagne. That is why today, many people with diabetes do not like being known as such." (Patient)

This statement resonated with other participants as well. Compared with other chronic diseases such as hypertension, participants reported a higher perceived severity for the prognosis of diabetes.

"When you say you are hypertensive, it does not bother anyone. However, when you say you have diabetes, it is as if you will die tomorrow. You no longer have the support of the family." (Patient)

Denial. Potentially due to the negative perceptions and social stigma associated with the disease, some patients seemed reluctant to accept their status, ignoring the fact that they have the disease.

"I know it is a disease. However, to say that I have diabetes is like trying to give myself a name. I know I have diabetes, but I do not want to be told that I have diabetes." (Patient)

"The problem is that people do not want you to know they have diabetes. They think they are going to die anyway. Some do not care and do what they want." (Community partner)

Roles of religion. When asked how they manage stress, participants indicated that religion serves as a coping mechanism to drive away negative feelings or impulses.

"I am happy that we are all Christians. Therefore, on the psychological side, we do not need help. Because if we are called Christians, we will not say that we are tired or want to kill ourselves or isolate ourselves from the world." (Patient)

"What is depression? For me, it is a matter of faith. There is a supreme being." (Patient)

Focusing on what is controllable. Another strategy that participants used to cope with stress was focusing on what is controllable. For example, rather than dwelling on negative feelings, patients actively engaged in physical activities or monitored their blood sugar levels to de-stress during difficult periods.

"What I do is to control my blood sugar almost every morning. When I have bad results, I tell myself that I have not done what I have to do. I run with friends." (Patient)

Discussion

All too often, self-management interventions are not adequately matched to the characteristics and circumstances of the intended target audience in sub-Saharan Africa (SSA) [13][14][15]. Thus, tailoring MSD, a diabetes self-management education program, to meet the target population's needs will likely increase its usability, appeal, and effectiveness. As a result, this study will contribute to the development of a culturally sensitive program based on information specific to Benin. To the best of our knowledge, this is the first study to examine specific cultural beliefs, attitudes, behaviors, and environmental factors relevant to the adaptation of a self-directed diabetes intervention in Benin from the perspective of patients, academics, and community partners. We found broad consensus between the three groups. Patients, however, tended to give reasons for their behavior, whereas academic and community partners reported salient behaviors of patients. As Al Slamah et al. found in Saudi Arabia and Yeary et al. found in the Marshall Islands [37,38], sociocultural beliefs significantly influenced whether people with diabetes accepted their diagnosis and, thereby, participated in self-management interventions to improve disease outcomes. Similar to the literature [39], this study reveals gaps between local perceptions and evidence-based nutritional research in healthy eating and access to healthy food. Cultural norms and gender roles influenced local perceptions.
Specifically, despite their nutritional benefits, fruits and vegetables are not part of the dietary habits of the studied communities and are considered commodities of the poor. By reducing barriers such as lack of knowledge and skills to translate the recommendations into practice, a culturally appropriate diabetic meal plan will make it easier for people to adhere to the recommended diet. Consideration should also be given to environmental factors in Cotonou regarding healthy food, such as access, availability, and acceptability. Additionally, it will be necessary to consider crucial recommendations, such as choosing low-glycemic index foods and incorporating local foods. It will also be essential to include resources that facilitate adherence to the menus, such as recipes, cooking tips, a list of local foods, and where to obtain them, as suggested by Asaad et al. [40]. Finally, women were mentioned as having a significant role in food selection and preparation, indicating the importance of involving women in the intervention, as observed by Baig et al. [41]. All participants saw physical activity as critical to improving their diabetes outcomes. However, they noted that interventions to increase exercise would be more effective if patients received good social support, especially peer support and modules with enjoyable exercise routines. This finding is similar to Thomas et al.'s [42] conclusions that people with diabetes do not participate in physical exercise because of the difficulty of engaging in strenuous exercises and the distraction of other entertainment activities like watching television. In addition, busy schedules, inadequate structural facilities, and lack of time were perceived as barriers. The preference for physical activities in groups is in line with the Beninese cultural valuation of community-oriented activities [43]. Overall, interventions that promote physical activity should be flexible and adaptable to the environment of the participant group and must be enjoyable. A patient's ability to self-manage diabetes often hinges on adherence to treatment, good coping skills, and problem solving when their blood sugar targets are not being met. As previously observed in SSA [44,45], financial hardships impair patients' ability to regularly monitor and maintain their blood sugar levels and adhere to medications and medical equipment. Therefore, a financial incentive could motivate them to maintain an optimal blood sugar level [46]. Specifically, access to personal monitoring kits can help patients regularly monitor and promptly adjust their diet based on the values they observe. However, isolated, vertical funding for diabetes alone may not be sustainable. A potential pathway to universal health coverage would protect beneficiaries, maximize financial protection, and prevent catastrophic health costs [47]. Despite the rising popularity of universal healthcare in policy discussions, less than one percent of Benin's population is covered under these schemes [28]. A recent report from Kenya, Turkey, Mexico, Thailand, and China demonstrated early successes with the universal health coverage scale-up initiative [48]. Finally, yet importantly, participants used their cultural and religious beliefs to cope in stressful situations, suggesting that these beliefs could be used to improve coping and problem-solving skills. 
This study also sheds light on negative social perceptions and stigmas associated with diabetes, which have significant implications, since they may lead to self-denial, hinder health-seeking behaviors, and negatively affect program acceptance. The findings indicate a need to sensitize the population and increase its awareness of the disease. Further attention should be given when publicly communicating the intervention to avoid unexpected adverse outcomes. Our study offers little evidence concerning reducing diabetes-related complications, reflecting a lack of awareness in the region regarding the specific complications and risk reduction [39]. As previously observed in SSA [49], many people with diabetes do not like to share their symptoms with other family members for fear of burdening them. Diabetes is often seen as a punishment from God and is related to fatalistic beliefs. Thus, spiritual beliefs and discourses could be leveraged when delivering an intervention to encourage and motivate regular self-care and to reduce complications. Motivational support (both spiritual and emotional) is necessary, as psychological stress adversely affects health outcomes, mortality, and quality of life [50,51].

The study has several strengths and limitations. First, this study utilized qualitative comparison groups to elicit multiple perspectives involving patients, community partners, and academic partners [52]. Second, we were able to capture perspectives from diverse groups by using a unified interview guide. Third, in addition to demonstrating the need to adapt an evidence-based diabetes self-management curriculum to specific contexts, this study highlights the importance of considering contextual nuances. Fourth, we identified several barriers and facilitators that are common among different demographic groups. However, social desirability bias may have occurred if participants felt they were unable to express personal barriers. Another potential limitation is self-selection bias: participants who took part in the study may have been more open about sharing their experiences than those who did not, so the findings may not represent the entire population. Additionally, we did not use a theoretical framework to direct our data collection and analysis; the FGD questions were derived from the literature and informed by the MSD curriculum to understand the cultural factors that might influence the implementation of MSD. Finally, because this study was conducted in a specific region, its findings may not generalize to other parts of the country.

Implications for MSD Intervention

The findings of this study have implications for the success of the MSD program in Benin. Recommendations emerging from this study include the following:

• The MSD program must incorporate a meal menu plan that complies with the Benin nutrition therapy guidelines while taking into account factors such as access, acceptance, and accessibility of local foods. Additionally, the curriculum should include resources that facilitate adherence to the menu, such as recipes, cooking tips, a list of local foods and sources, and information on low-glycemic index foods.

• A revised curriculum could also gradually incorporate nutritious fruit snacks into the daily diet, introduce healthy foods patiently (e.g., vegetables), and ensure that portions are reasonable based on local bowls.
• Considering the DSME component of physical activity, the findings suggest that Beninese physical activities should be based on group support and consist of easy and enjoyable activities, like dancing to local music.

• Integrating glucose monitoring strategies within a collectivistic family framework will be essential for the MSD program to succeed. This strategy will involve the entire family in helping persons with diabetes monitor their blood sugar levels.

• Due to the lack of information regarding diabetes medications, clear guidance on how the medications work, and on how they differ from traditional medicine, will be imperative. Education on continuing medication will also be crucial, since Benin has a low compliance rate.

• Patients were reluctant to tell other family members about their diabetes symptoms for fear of burdening them. To combat these misconceptions, it will be crucial to stress to participants the importance of being open with one's family about diabetes symptoms, to ensure their long-term well-being. Early detection of symptoms and treatment can prevent significant problems and stressors in the long term.

• Patients with T2D also described the fatalistic belief that diabetes is God's punishment. However, given the context of Benin, where religion plays a key role, spirituality and faith could be used to counter fatalistic beliefs.

• Given patients' reticence to express their emotions, particularly outside their families, an MSD intervention must foster communication and trust (e.g., by being genuine with families and willing to help) to allow for discussions about stress and how to cope with it.

• Moreover, in addition to identifying specific cultural aspects of each DSME element, it will be essential to consider the matriarchal culture in Benin, with its distinct gender roles, since women are heavily involved in food preparation.

Conclusions

Ultimately, we can conclude that this study contributes in several ways to our understanding of the role of sociocultural norms, perceptions, and realities in implementing an evidence-based diabetes self-management education curriculum, and it provides a basis for adaptations to make the curriculum more effective. Additionally, this study fills a significant knowledge gap related to barriers and facilitators of the seven essential self-care behaviors in urban communities of Benin. Future intervention strategies should target the identified factors and emphasize the overlapping themes from our study and previous studies. Adding awareness of local foods and practices, adjusting cooking practices, and addressing the stigma and gender roles associated with diabetes could be helpful when working with the surveyed communities. Additionally, besides showing favorable views from key stakeholders toward diabetes self-management programs, the findings highlight gaps in the healthcare system. We contend that this program could inform a more contextually specific diabetes self-management intervention, alleviate many of the challenges currently faced by diabetes care providers in Cotonou, and address the rising prevalence of T2D in the country.
Stokes drift and its discontents
The Stokes velocity u^S, defined approximately by Stokes (1847, Trans. Camb. Philos. Soc., 8, 441-455) and exactly via the Generalized Lagrangian Mean, is divergent even in an incompressible fluid. We show that the Stokes velocity can be naturally decomposed into a solenoidal component, u^S_sol, and a remainder that is small for waves with slowly varying amplitudes. We further show that u^S_sol arises as the sole Stokes velocity when the Lagrangian mean flow is suitably redefined to ensure its exact incompressibility. The construction is an application of Soward & Roberts's glm theory (2010, J. Fluid Mech., 661, 45-72. (doi:10.1017/S0022112010002867)) which we specialize to surface gravity waves and implement effectively using a Lie series expansion. We further show that the corresponding Lagrangian-mean momentum equation is formally identical to the Craik-Leibovich (CL) equation with u^S_sol replacing u^S, and we discuss the form of the Stokes pumping associated with both u^S and u^S_sol. This article is part of the theme issue 'Mathematical problems in physical fluid dynamics (part 1)'.
Introduction
Surface gravity waves induce a rectified motion of fluid particles and thus a wave-averaged difference between the mean Eulerian velocity, u^E, and the mean Lagrangian velocity u^L [1,2]:
u^L = u^E + u^S. (1.1)
The Stokes velocity u^S appears, through the vortex force and the Stokes-Coriolis force, in the wave-averaged momentum and vorticity equations [3-7]. The Stokes velocity can be defined exactly at finite wave amplitude using Generalized Lagrangian Mean (GLM) theory [8,9]. This exact GLM u^S is rotational and compressible, even if the underlying fluid motion is irrotational and incompressible [10]. Expansion in powers of a wave-amplitude parameter ε produces the standard approximation [2,11] to the Stokes velocity
u^S = \overline{(ξ_1 · ∇) u_1}. (1.2)
The overbar in (1.2) denotes a running time mean, or phase average. In (1.2), u_1 is the linear (first order in ε) velocity of the wave and the associated displacement ξ_1 is defined by ∂_t ξ_1 = u_1 and \overline{ξ_1} = 0. (The subscript 1 indicates the first-order fields throughout.) The small-amplitude approximation to u^S in (1.2) is also rotational and compressible: assuming only that ∇ · ξ_1 = 0, McIntyre [10] shows from (1.2) that ∇ · u^S is, in general, the non-vanishing time derivative of an averaged quadratic quantity (their equation (1.3)). The time derivative of an averaged quadratic quantity in (1.3) entails the same slow-modulation assumption that underlies the concept of group velocity and so introduces a second small parameter, μ. The Eulerian mean velocity, u^E in (1.1), is incompressible, and thus the divergent u^S in (1.3) implies a divergent Lagrangian mean velocity.
In figure 1, we illustrate the role of the two small parameters ε and μ by considering a weakly nonlinear, slowly modulated two-dimensional packet of deep-water surface gravity waves. The Stokes expansion [1,10] is justified by the weakly nonlinear assumption that the wave slope is small: ε = ak ≪ 1, where k is the wavenumber and a is the amplitude of the surface displacement. In this example the slow-modulation parameter is μ = (kℓ)^{-1} ≪ 1, where ℓ is the length scale of the packet envelope. Figure 2 shows the motion of a fluid particle in the velocity field of this wave.
Despite (1.3), and the exact results provided by GLM, some authors are reluctant to accept the reality of non-zero ∇ · u^S. Moreover, discontent with ∇ · u^S ≠ 0 is sometimes confounded with unease over the vertical component of the Stokes velocity, w^S = ẑ · u^S.
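To fix ideas, here is the standard textbook evaluation of (1.2) for a single unmodulated deep-water wave; this is a classical result included for illustration, not a quotation from the paper. Writing the surface elevation as η_1 = a cos θ with θ = kx − ωt, ω² = gk and phase speed c = ω/k, the linear fields and the resulting drift in the (x, z) plane are

\begin{align}
u_1 &= a\omega e^{kz}\,(\cos\theta,\ \sin\theta), &
\xi_1 &= a e^{kz}\,(-\sin\theta,\ \cos\theta), \\
u^S &= \overline{(\xi_1\cdot\nabla)\,u_1}
   = \bigl((ak)^2 c\, e^{2kz},\ 0\bigr).
\end{align}

For μ = 0 the drift is purely horizontal and depends on z alone, so w^S = 0 and ∇ · u^S = 0; it is the slow modulation of the envelope (μ ≠ 0) that generates both a vertical component and a divergence.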
For example, rather than taking the vertical component of (1.2), McWilliams et al. [12] define a 'vertical Stokes pseudo-velocity' which, together with the horizontal components of u^S, makes an incompressible three-dimensional 'Stokes pseudo-velocity'. Mellor [13], while emphasizing that u^S is divergent, is unwilling to accept a non-zero vertical component w^S: 'a mean vertical drift is not acceptable'.
[Figure 2 caption: Panels (a,b) show the x- and z-displacements as functions of time; panel (c) shows the trajectory. In this computation, we assume that the depth d is much greater than the packet length scale, so that the second-order Eulerian mean flow is negligible in the wave-active zone.]
In view of concerns with the mean vertical drift w^S, it is reassuring that particle tracking velocimetry can be used to observe vertical Lagrangian displacements, w^S ≠ 0, beneath groups of deep-water waves [14,15]: the mean vertical drift is upward as a wave packet arrives and downward as the packet departs. (For the wave packet in figure 1, the maximum vertical displacement resulting from w^S is about 4.8 cm; this is not visible in figure 2.) After the passage of the packet a fluid particle returns to its initial depth. It is a net vertical displacement that would be unacceptable: transient vertical motion, on time scales longer than a 10-s wave period and shorter than the 100-s packet transit time, is not a concern.
Mellor critiques the Craik & Leibovich ([3,4]) vortex-force formulation of wave-mean interaction by arguing that CL and subsequent authors incorrectly assume that the divergence of u^S is zero. A different interpretation is that CL and many followers assume that the wave field has no temporal modulation, so that the right-hand side of (1.3) is conveniently zero. For example, McWilliams & Restrepo [7] claim to prove ∇ · u^S = 0, but examination of this argument shows that [7] assumes that there is no temporal modulation of the wave field. This raises the issue of whether the CL formulation is incomplete or misleading in situations with temporal modulation of the wave field, e.g. in ocean observations [16] and in modelling the growth of swell [17].
In this paper, we revisit the concept of Stokes velocity. For surface gravity waves, we exhibit a natural Helmholtz decomposition of u^S in (1.2) and argue that the solenoidal component, u^S_sol, can advantageously replace u^S in most situations. We emphasize that the familiar form (1.2) of the Stokes velocity is not unique but depends on a specific definition of the Lagrangian-mean flow, namely the GLM definition of Andrews & McIntyre [8]. An alternative definition, proposed by Soward & Roberts [18] and closely related to classical averaging and its Lie series implementation (e.g. [19,20]), leads to a solenoidal Lagrangian-mean velocity with u^S_sol as the corresponding Stokes velocity. This alternative definition, known as 'glm' but better characterized as 'solenoidal Lagrangian mean', has the added benefit of coordinate independence in any geometry, unlike standard GLM (see [21] for other coordinate-independent definitions of the Lagrangian mean). We show that, for surface gravity waves, the associated Lagrangian-mean momentum equation governing the dynamics of the Eulerian mean flow is the CL equation with u^S_sol replacing u^S.
The difference between GLM and glm is vividly illustrated with an example proposed by O. Bühler (personal communication, 2021). Consider a bucket of initially motionless water.
If the water is agitated, for example by pressure forcing at the surface, then the potential energy of the water is increased, or equivalently the centre of mass of the water is elevated above its initial height. The fluid at the bottom of Bühler's bucket, however, cannot move in the vertical, and so any definition of 'Lagrangian mean' that tracks the position of the centre of mass of the fluid (such as GLM) will be divergent in this situation, even though the velocity of the water is entirely incompressible. Conversely, any definition of 'Lagrangian mean' resulting in a strictly solenoidal Lagrangian mean velocity (such as glm) cannot track the centre of mass of the fluid.
The plan of the paper is as follows. In §2, we sketch the derivation of the standard form (1.2) of the Stokes velocity, give its Helmholtz decomposition, and show how a simple modification of this derivation, implementing an alternative Lagrangian-mean flow definition, naturally brings about the velocity u^S_sol. In §3, we examine the respective roles of u^S and u^S_sol in Stokes pumping, which is the mechanism whereby the horizontal divergence of the Stokes transport drives an Eulerian mean flow. In §4, we show how the glm approach enables the systematic construction of solenoidal Lagrangian-mean and Stokes velocities up to arbitrary algebraic accuracy in ε. We explain how Lie series provide both an interpretation and an efficient implementation of this construction, and we derive the glm version of the CL equations. Section 5 gives the conclusion.
2. The Stokes velocity u^S and its solenoidal part u^S_sol
(a) Derivation of the Stokes velocity
We start by recalling the traditional derivation of the Stokes velocity in (1.2). The position x(t) of a fluid particle is determined by solving
ẋ = u(x, t, α, ε), (2.1)
where α is a wave phase, regarded as an ensemble parameter. The fluid velocity u(x, t, α, ε) is incompressible, ∇ · u = 0, and has the form
u = ε u_1 + ε² u_2 + ⋯, (2.2)
where ε is the wave amplitude parameter. The leading-order term u_1(x, t, α) is a fast wavy flow, so \overline{u_1} = 0, where the mean, denoted by the overbar, is an average over the phase α. To average the fast wave oscillations in (2.1), we consider the ansatz
x(t) = x^L(t, ε) + ξ(x^L, t, α, ε), with ξ = ε ξ_1 + ε² ξ_2 + ⋯, (2.3a,b)
where x^L(t, ε) is the slow motion of a Lagrangian mean position. Think of x^L as a 'guiding centre' such that rapid wavy oscillations are confined to the displacements ξ_n; these displacements from x^L do not grow with time, i.e. all members of the ensemble remain close to the guiding centre x^L. The motion of x^L is written as
ẋ^L = ε² u^L(x^L, t, ε), (2.5)
where u^L(x, t, ε) is the Lagrangian mean velocity, yet to be defined and determined. The ansatz (2.3b) is ambiguous because requiring only that the ξ_n's do not grow with time does not uniquely determine u^L and ξ_n. We return to this point below. Substituting (2.3b) and (2.5) into (2.1) and (2.2) and matching powers of ε at the first two orders results in
∂_t ξ_1 = u_1 and u^L_2 + ∂_t ξ_2 = u_2 + (ξ_1 · ∇) u_1. (2.6a,b)
Choosing to follow Stokes [1] and Andrews & McIntyre [8], one disambiguates (2.3b) by requiring that \overline{ξ_2}(x^L, t) = 0. In this case, averaging (2.6b) produces the familiar result
u^L = \overline{u_2} + \overline{(ξ_1 · ∇) u_1} = u^E + u^S. (2.7)
In (2.7) we have now omitted the subscript 2 on u^L. With (2.7) we recover (1.2) and the small wave-amplitude version of (1.1).
(b) The solenoidal Stokes velocity u^S_sol
Let us examine the divergence of u^S in more detail. The 'un-averaged Stokes velocity' (ξ_1 · ∇) u_1 can be written exactly as
(ξ_1 · ∇) u_1 = ½ ∇ × (u_1 × ξ_1) + ½ ∂_t [(ξ_1 · ∇) ξ_1]. (2.8)
In deriving (2.8), we have used wave incompressibility, ∇ · u_1 = ∇ · ξ_1 = 0, to simplify the standard vector identity for the curl of the cross product u_1 × ξ_1.
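As a check on (2.8), and consistent with the 'time derivative of an averaged quadratic quantity' referred to around (1.3), the divergence of u^S follows in two lines. This is our reconstruction from the stated assumptions (∇ · u_1 = ∇ · ξ_1 = 0 and u_1 = ∂_t ξ_1), not a quotation of the paper's (1.3):

\begin{equation}
\nabla\cdot u^S
  = \partial_i\,\overline{\xi_{1j}\,\partial_j u_{1i}}
  = \overline{(\partial_i \xi_{1j})(\partial_j u_{1i})}
  = \tfrac{1}{2}\,\partial_t\,\overline{(\partial_i \xi_{1j})(\partial_j \xi_{1i})},
\end{equation}

which vanishes for statistically steady waves and is O(μ) small for slowly modulated ones.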
The average of (2.8) identifies a solenoidal part of the Stokes velocity,
u^S_sol = ∇ × (½ \overline{u_1 × ξ_1}). (2.11a)
The solenoidal vector u^S_sol is the incompressible part of the Stokes velocity for all types of weakly nonlinear waves in an incompressible fluid. We propose that u^S_sol can advantageously replace the traditional form of the Stokes velocity (1.2) in many circumstances.
The solenoidal Stokes velocity arises naturally if a small change is made in the derivation in §2(a): suppose we decide from the outset to change the definition of 'Lagrangian mean' so that
\overline{ξ_2} = ½ \overline{(ξ_1 · ∇) ξ_1}
(rather than \overline{ξ_2} = 0). Averaging (2.6b) then results in
u^L = u^E + u^S_sol. (2.12)
The incompressible Lagrangian mean velocity in (2.12) is an alternative to the traditional compressible u^L in (2.7). The O(ε²) shift \overline{ξ_2} in the position of the guiding centre x^L produces Lagrangian mean and Stokes velocities that are divergence-free at order ε². In §4, we discuss a systematic framework, Soward & Roberts's [18] glm alternative to GLM, that generalizes this property to arbitrarily high order in ε. Note that the difference between u^S and u^S_sol is a time derivative and so has no impact on particle dispersion for waves that are represented by stationary random processes, as discussed by Holmes-Cerfon & Bühler [22].
3. Stokes transport and Stokes pumping
Discussions of the deep return flow associated with a surface-gravity-wave packet argue [2] that: 'the return flow · · · can be explained as the irrotational response to balance the Stokes transport · · · that acts to "pump" fluid from the trailing edge of the packet to the leading edge'. Similar sentiments are expressed in [23,24]. With this physical picture in mind, it is instructive to compare the Stokes pumping associated with u^S with that of u^S_sol. The interpretation of the return flow in the quotation above is most consistent with u^S_sol.
(a) Stokes pumping and u^S_sol = ∇ × (½ \overline{u_1 × ξ_1})
We start with the easy solenoidal case, with the Stokes transport T^S_sol defined as the vertical integral of the horizontal components of u^S_sol, where the vertical integration is from the bottom of the wave-active zone (denoted −∞) to the mean sea surface at z = 0. (In this section, we confine attention to deep-water waves so that the lower limit, −∞, is well above the distant bottom.) Vertical integration of ∇ · u^S_sol = 0 over the wave-active region produces the unsurprising result
∇_h · T^S_sol + w^S_sol|_0 = 0, (3.2b)
where we use the subscript 0 to denote evaluation at z = 0, e.g. w^S_sol|_0 = w^S_sol(x, y, 0, t). The result (3.2b) is consistent with the idea that horizontal convergence within the wave-active zone pumps fluid out of the wave-active zone, with the vertical velocity w^S_sol(x, y, 0, t). Vertical integration of u^S_sol = ½ \overline{(ξ_1 · ∇) u_1} − ½ \overline{(u_1 · ∇) ξ_1} over the wave-active zone results in expressions (3.3a) and (3.3b), in which χ is a streamfunction for a horizontally circulating Stokes transport. This horizontal Stokes circulation has previously gone unremarked, perhaps because the terms involving χ in (3.3a) and (3.3b) are a factor μ smaller than the other terms. Using the coordinate expressions in (3.3a) and (3.3b), the solenoidal pumping can be expressed entirely in terms of surface quantities.
(b) Stokes pumping and u^S = \overline{(ξ_1 · ∇) u_1}
We turn now to the traditional definition of the Stokes velocity, with the Stokes transport T^S defined as the vertical integral of the horizontal components of u^S. Because ∇ · u^S ≠ 0, there is no analogue of (3.2b). Instead, integrating (2.15) over the wave-active zone produces (3.8), and vertical integration of the horizontal components of u^S over the wave-active zone gives (3.10); in the final term of (3.10), the indices α and β are 1 and 2.
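For orientation (and anticipating §3(c) below), the transport integral can be evaluated in closed form for a uniform deep-water wave, using the drift u^S = (ak)² c e^{2kz} x̂ from the worked example in the Introduction; at this order T^S and T^S_sol coincide:

\begin{equation}
T^S = \int_{-\infty}^{0} (ak)^2 c\, e^{2kz}\, dz
    = \frac{(ak)^2 c}{2k}
    = \tfrac{1}{2}\, a^2 \omega ,
\end{equation}

which is the familiar statement that the Stokes transport equals the wave energy divided by the phase speed (E/c with E = ½ g a² per unit density).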
Eliminating ∇ · T^S between (3.8) and (3.10) yields (3.11). The results in (3.8), (3.10) and (3.11) are all more complicated than their solenoidal cousins in (3.2b), (3.3a) and (3.3b).
(c) A comment on approximations to the Stokes transport
A widely used approximation to the Stokes transport, (3.12), is exact for a uniform (μ = 0) progressive wave and is a leading-order approximation for a slowly modulated (μ ≪ 1) wave packet. To see the connection with the more general and exact results above, align the x-axis with the horizontal wavevector so that v_1 = 0; then ŷ · T^S = ŷ · T^S_sol = χ = 0. Thus, with an error of order ε²μ, and in agreement with (3.12), T^S and T^S_sol coincide. In other words, (3.12) is a leading-order approximation, and at this order T^S and T^S_sol are identical.
4. glm
We return to the definition of the Stokes velocity and show how the glm theory of Soward & Roberts [18] rationalizes and generalizes the heuristic construction of the solenoidal velocity u^S_sol. In §2(a), we emphasized the ambiguity in the decomposition (2.3a) of trajectories into a mean part x^L and a perturbation ξ. GLM resolves this ambiguity by imposing that
\overline{ξ} = 0. (4.1)
While (4.1) is widely accepted, it is neither inevitable nor particularly natural. It implies that the Lagrangian-mean trajectory of a particle is defined by the equality x^L = \overline{x} between the coordinates of the Lagrangian-mean position and the average of the coordinates of the particle. This indicates that the Lagrangian-mean trajectory and, as a result, the Lagrangian-mean velocity u^L depend on a choice of coordinates. This undesirable feature of GLM is remedied by glm, while also ensuring that u^L is non-divergent (see [21] for other alternatives to GLM).
(a) Formulation
To introduce glm, it is convenient to rewrite equation (2.1) governing the fluid trajectories in terms of the flow map ϕ(x, t, α, ε), giving the position at time t of the fluid particle initially at x:
∂_t ϕ(x, t, α, ε) = u(ϕ(x, t, α, ε), t, α, ε). (4.2)
The decomposition of trajectories into mean and perturbation is best written as the composition
ϕ = Ξ ∘ ϕ^L, (4.3)
or, more explicitly, ϕ(x, t, α, ε) = Ξ(ϕ^L(x, t, ε), t, α, ε). Here ϕ^L is the (α-independent) mean map, sending the initial position of particles to their mean position, and Ξ is the perturbation map, sending the mean position to the exact, perturbed position. The perturbation map can only be represented in the familiar form x ↦ x + ξ(x, t, α, ε), as in (2.3a), in Euclidean space, where positions x can be identified with vectors and added, or, in more general geometries, once a specific coordinate system has been chosen and x is interpreted as a triple of coordinates. The smallness of the perturbation, usually stated as |ξ| ≪ 1, translates into the requirement that Ξ be close to the identity map. We emphasize that the mean map ϕ^L is not obtained from ϕ by applying an averaging operator: averaging is a linear operation that applies to linear objects, such as vector fields, but not to nonlinear maps such as ϕ. Instead, ϕ^L is defined by imposing a condition on the perturbation map Ξ. The form of the Lagrangian-mean velocity u^L, defined by
∂_t ϕ^L(x, t, ε) = u^L(ϕ^L(x, t, ε), t, ε), (4.4)
depends on this condition. The defining condition of glm is expressed as follows. The small parameter ε is regarded as a fictitious time, and the perturbation map Ξ is constructed as the flow at 'time' ε of a vector field, q say; that is,
∂_ε Ξ(x, t, α, ε) = q(Ξ(x, t, α, ε), t, α, ε), with Ξ(x, t, α, 0) = x, (4.5)
and t is treated as a fixed parameter. The glm condition is then
\overline{q} = 0. (4.6)
The glm condition is then Although superficially similar to the GLM condition (4.1), (4.6) is fundamentally different in that it is an intrinsic statement, applicable to any manifold and independent of any coordinate choice. Moreover, the glm formulation defines an exactly divergence-free Lagrangian mean flow, by requiring that ∇ · q = 0 (4.7) to ensure that Ξ and ϕ L preserve volume and hence ∇ · u L = 0. Equation (4.6) generalizes the condition on ξ 2 in (2.12) which leads to u S sol as the Stokes velocity. We show this by solving (4.5) by Taylor expansion. In coordinates, we have that (4.8) The power series expansions (2.3b) for ξ and q = q 1 + q 2 + 2 q 3 + · · · (4.9) for q then give ξ 1 = q 1 and ξ 2 = q 2 + 1 2 q 1 · ∇q 1 (4.10) so that (4.6) implies (2.12). In the next section, we develop a systematic computation of u L and hence u S sol order by order in using Lie series. This removes the need to introduce ξ by focusing the perturbation expansion on q. (b) Lie series expansion The glm formalism can be regarded as an instance of classical perturbation theory, which approximates solutions to the ordinary differential equation the variable transformation and ϕ L represents the new variable. Lie series [19,20] provide a powerful tool for the systematic implementation of classical perturbation theory which we now apply to glm. Introducing the decomposition (4.3) into (4.2) and using (4.4) gives w + Ξ * u L = u, (4.11) where is the perturbation velocity and Ξ * is the push-forward by Ξ , with Ξ * u L = (u L · ∇)Ξ in Cartesian coordinates. Pulling back (4.11) gives We seek an -dependent Ξ to eliminate fast time dependence from u L order-by-order in , and formulate the problem in terms of the vector field q that generates Ξ according to (4.5). We impose that ∇ · q = 0 to ensure that ∇ · u L = 0, and the glm condition (4.6). Expanding q as in (4.9) we relate the various terms by differentiating (4.13) repeatedly with respect to and evaluating the results at = 0. Two key identities turn this into a mechanical exercise. The first, essentially the definition of the Lie derivative [25], is (4.14) where L q u = q · ∇u − u · ∇q is the Lie derivative of u along q. The second, is established in appendix A. Iterating (4.14) and (4.15) we find Introducing (4.16) and (4.17) into (4.13) gives at the first two orders in , ∂ t q 1 = u 1 , i.e. q 1 = ξ 1 (4.18) and on using (4.6). This provides the geometric formula for the solenoidal Stokes drift, equivalent to (2.11a) since L q 1 u 1 = q 1 · ∇u 1 − u 1 · ∇q 1 = ∇ × (u 1 × ξ 1 ). The expansion can be pursued to higher orders by choosing divergence-free q n that push the fast dependence on the right-hand side of (4.13) to order n+1 . This yields u L and hence u S sol to arbitrary order in . on the case of small-amplitude surface gravity waves. This derivation is conveniently carried out using a representation of the rotating Euler equation (4.21) with D t = ∂ t + u · ∇, in terms of the absolute momentum 1-form [21,26,27] ν a = u · dx + 1 2 (f × x) · dx. (4.22) It can be checked that (4.21) is equivalent to where π = p − 1 2 |u| 2 − 1 2 (f × x) · u, using basic properties of the Lie derivative, namely Leibniz rule, commutation with the differential d, and that L u = u · ∇ when applied to scalars [25]; see appendix A for details. Equation (4.23) can be thought of as a local version of Kelvin's circulation theorem. 
This is readily obtained by integration along a closed curve C(t) moving with u, to find
∮_{C(t)} ν_a = const. (4.24)
An advantage of (4.23) is that it leads directly to a Lagrangian-mean momentum equation of a similar form [21],
(∂_t + L_{u^L}) ν^L_a = −dπ^L, (4.25)
and to the corresponding Lagrangian-mean Kelvin's circulation theorem
∮_{C^L(t)} ν^L_a = const, (4.26)
where the closed curve C^L(t) moves with the Lagrangian-mean velocity u^L. Here the Lagrangian-mean momentum and effective pressure are given by
ν^L_a = \overline{Ξ^* ν_a} and π^L = \overline{Ξ^* π}. (4.27)
The pull-back Ξ^* by the perturbation map acts on scalars as a composition, e.g. (Ξ^* π)(x, t, α, ε) = π(Ξ(x, t, α, ε), t, α, ε), and commutes with the differential (4.28). We show in appendix A that (4.25), together with the coordinate representation Ξ(x, t, α, ε) = x + ξ(x, t, α, ε) and the GLM condition (4.1), recovers Andrews & McIntyre's Lagrangian-mean momentum equation [8, theorem I]. For glm, we can use the Lie-series expression (4.16) to obtain an expansion of ν^L_a (equation (4.29)), using that u = ε u_1 + ε² u_2 + ⋯ and \overline{q_1} = \overline{q_2} = 0. Introducing (4.29) into (4.25) and expanding the Lie derivative gives an evolution equation for the Eulerian mean velocity u^E = \overline{u_2}, supplemented by the incompressibility condition ∇ · u^E = 0. The relation ∂_t q_1 = u_1 allows one term in (4.29) to be combined against the other two in the average. Moreover, using the irrotationality condition (2.13), we compute
L_{q_1}(u_1 · dx) = ((q_1 · ∇) u_1) · dx + u_1 · dq_1 = (q_{1j} u_{1i,j} + u_{1j} q_{1j,i}) dx_i.
Therefore, to leading order, the glm mean momentum equation reduces to (4.31), with the glm Lagrangian-mean velocity in (2.12) (see [26,27] for an analogous formulation of the GLM CL equation). Using Cartan's formula in the form
L_u α = ι_u dα + d(ι_u α), (4.32)
the Lie derivatives in (4.31) can be rewritten, up to exact differentials that we do not detail. This makes it possible to rewrite (4.31) as (4.35), for a suitable definition of the effective pressure. Equation (4.35) can be recognized as the CL equation [3,4,28] with the solenoidal Stokes velocity u^S_sol replacing u^S. This is not surprising, since the O(ε²μ) difference between u^S_sol and u^S is of the same order as terms neglected in the derivation of the CL equation. (CL geared μ to ε by taking μ = ε²; in our derivation, this gearing is not necessary.) In fact the assumption μ ≪ 1 is only used to neglect the term (4.30) from the Lagrangian-mean momentum equation to obtain (4.31) and hence (4.35). Restoring this term leads to a generalization of the CL equation valid for μ = O(1). Since L_{u^L} and d commute, L_{u^L} d(∂_t \overline{|q_1|²}) = d(⋯), and the additional terms involving \overline{|q_1|²} can be absorbed into the differential of the pressure-like term on the right-hand side. We conclude that the CL equation (4.35) with u^S_sol as Stokes velocity holds for μ = O(1), provided that the effective pressure is suitably redefined.
5. Conclusion
This paper examines a problematic aspect of the Stokes velocity u^S, namely its divergence ∇ · u^S and non-zero vertical component w^S. Some confusion has arisen because ∇ · u^S and w^S are small when the wave field has an amplitude that varies on scales longer and slower than the wavelength and period. This scale-separation approximation corresponds to the existence of a small parameter μ ≪ 1, as is the case for slowly varying wavepackets. The distinction between approximate and exact results in the literature is not always made plain. Beyond this, there are no irretrievable difficulties with the familiar form (1.2) of u^S: when ∇ · u^S and w^S cannot be neglected, they are readily computed in terms of the first-order fields.
The main point of the paper, however, is that the Stokes velocity, understood as the difference u^S = u^L − u^E between Lagrangian- and Eulerian-mean velocities, is not uniquely defined, and that an alternative version, u^S_sol in (2.11a), which is exactly solenoidal, can serve as a convenient substitute for the familiar (1.2). The non-uniqueness arises because there is no single definition of the Lagrangian-mean velocity u^L, which is only constrained to serve as a good 'guiding centre' for the rapid wave displacements.
Appendix A (extract). Expanding, we find
(D_t u) · dx + u · du + ½ (f × u) · dx + ½ (f × x) · du = −∇p · dx + u · du + ½ (f × dx) · u + ½ (f × x) · du, (A 5)
which recovers (4.21) since (f × dx) · u = −(f × u) · dx.
Petri Nets for Concurrent Programming
Concurrent programming is used in all large and complex computer systems. However, concurrency errors and system failures (e.g., crashes and deadlocks) are common. We find that Petri nets can be used to model concurrent systems and to find and remove errors ahead of time. We introduce a novel generalization of Petri nets with nondeterministic transition nodes to match real systems. These allow for a compact way to construct, optimize, and prove computer programs at the concurrency level. Petri net programs can also be optimized by automatically solving for maximal concurrency, where the maximum number of valid threads is determined by the structure of the Petri net prior to execution. We discuss an algorithm to compute the state graph of a given Petri net start state pair. We introduce our open source software framework, which implements this theory as a general-purpose, concurrency-focused middleware.
I. INTRODUCTION
A. What is a Petri Net
Petri nets are named after their inventor, Carl Petri, who conceived them in 1939 for the purpose of describing chemical processes [12]. A Petri net is a bipartite directed graph with tokens assigned to nodes. A Petri net operates by letting tokens move according to a set of rules. The two partitions of nodes in a Petri net are called place nodes and transition nodes. As the graph is bipartite, place nodes link to transition nodes and vice versa, but place nodes do not connect to each other and neither do transition nodes. Only place nodes can hold tokens, and a place node can hold any number of them. Each directed edge has a positive integer weight. A transition node becomes enabled when every place node that points to it holds at least as many tokens as the weight on the corresponding edge. A transition node will fire once it is enabled. Upon firing, each of the transition node's incoming edges removes a number of tokens equal to its weight from the corresponding place, and each outgoing edge adds a number of tokens equal to its weight to the corresponding place. Note that the number of tokens is not necessarily conserved. For example, we model a chemical reaction with a Petri net in Figure 1: the firing corresponds to the chemical reaction and the tokens keep track of the atoms and compounds.
The state of a given Petri net is described by the allocation of tokens to its place nodes. Given a Petri net and a state, each possible next state may be found by applying enabled firing operations. From a starting state, a directed state graph can be constructed, where each node is a state and each directed edge between states represents an enabled transition firing. We will call a Petri net together with an initial state (an allocation of tokens to place nodes) a Petri net start state pair. A bounded Petri net start state pair has a finite number of nodes on its state graph; an unbounded Petri net start state pair has an infinite number. We show the state graph of the above Petri net start state pair in Figure 2, an example of an unbounded Petri net start state pair in Figure 3, and its corresponding state graph in Figure 4.
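The firing rules above translate almost directly into code. The following minimal sketch is our own illustration (not part of the paper's framework); the particular net in main, a reaction-style example, is hypothetical. It checks enablement against edge weights and fires a transition:

// A minimal sketch of the firing rules described above.
type Marking = Vec<u32>; // token count per place node

struct Transition {
    inputs: Vec<(usize, u32)>,  // (place index, edge weight)
    outputs: Vec<(usize, u32)>, // (place index, edge weight)
}

impl Transition {
    // Enabled when every input place holds at least the edge weight.
    fn enabled(&self, m: &Marking) -> bool {
        self.inputs.iter().all(|&(p, w)| m[p] >= w)
    }
    // Fire: consume weighted tokens from inputs, deposit to outputs.
    // Note that tokens are not necessarily conserved.
    fn fire(&self, m: &Marking) -> Marking {
        let mut next = m.clone();
        for &(p, w) in &self.inputs { next[p] -= w; }
        for &(p, w) in &self.outputs { next[p] += w; }
        next
    }
}

fn main() {
    // Hypothetical reaction-style net: places 0 and 1 feed one transition, place 2 receives.
    let t = Transition { inputs: vec![(0, 2), (1, 1)], outputs: vec![(2, 2)] };
    let m: Marking = vec![2, 1, 0];
    if t.enabled(&m) {
        println!("{:?}", t.fire(&m)); // prints [0, 0, 2]
    }
}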
B. Colored Petri Nets
Colored Petri nets are a generalization of Petri nets described by C. R. Zervos in 1977 [18]. In a colored Petri net, each token has an associated color, and transition nodes are only enabled by sufficient tokens of certain colors, as specified by the transition node. In the case that all tokens have the same color and transition nodes require that universal color, this is just a standard Petri net. Consider the example of a colored Petri net start state pair in Figure 5. In this net, place node P0 holds only a blue token. Transition node T0 requires a blue token and transition node T1 requires a red token. So, in the given start state, only the transition T0 can fire. Colored Petri nets that allow an infinite number of possible colors are Turing complete [11]. However, even colored Petri nets with a finite number of colors can encode logic concisely: the colored Petri net start state pair of Figure 5, where the only possible colors are red and blue, can be expressed by the flow chart in Figure 6.
C. Contributions
In Section III, we propose a generalization of Petri nets called Nondeterministic Transitioning Petri nets (NT-Petri nets), which allow transitions to fire nondeterministically according to arbitrary firing conditions. An NT-Petri net has a solvable state graph but allows the system designer to build systems with fewer elements on the graph. This is achieved by allowing each transition to have multiple state deltas, determined either by hidden mechanisms or by the particular distribution of tokens at the input places. We find that this greatly enhances the designability of systems described with NT-Petri nets compared to standard Petri nets.
In Sections IV and V, we introduce concurrent programs and discuss when they are useful or necessary for various system designs. We then describe how Petri nets model concurrent programs and contribute to the design process, discuss how to concurrently execute a Petri net program, and discuss the balance between concurrent and sequential execution of a given Petri net, from fully serialized to maximally concurrent.
In Section VI, we introduce our open source software framework, available on GitHub, that enables system designers to construct, execute, analyze, and optimize concurrent programs as NT-Petri nets. The framework is written in the well-known language Rust [9] for efficiency and compile-time memory safety. We then use the framework to build an example NT-Petri net featuring several firing conditions with nondeterministic outcomes due to hidden internal states or race-condition nondeterminism.
In Section VII, we discuss a realistic use case for an NT-Petri net program: a processing pipeline with a feedback mechanism. The pipeline uses a pan-tilt-zoom camera and microphone to isolate video and audio of participants in a meeting. The feedback component is the pan-tilt-zoom commands, which are fed back into the camera motor only after a later stage in the pipeline determines which person to focus the camera on. The NT-Petri net description of the system allows us to prove that the pipeline will never deadlock, given some assumptions about the computations taking place: namely, that the computations in the transitions always eventually finish and never cause the program to crash.
II. RELATED PREVIOUS WORK
A Petri net program is a computer program correctly modeled by a Petri net.
A computer program is correctly modeled as a Petri net when the state of all concurrent elements is modeled with tokens and all state transitions, which map to computation happening on the tokens, are guaranteed to complete. In 1998, a visual Petri-net-based programming language was built by Usher, M. and Jackson, D. [17]. It was successful in that it allowed the user to build and execute a computer program as a Petri net. However, it had some shortcomings: the Petri net programs were not analyzable, the language was entirely visual, it did not have color, and it used a slightly different definition of Petri net to allow for native composition of multiple nets.
While concurrent programming frameworks have been implemented and used for many years, the provability aspects of the resulting multiprograms have always lagged behind [14], because the frameworks built and used thus far have not adhered to provable theoretic specifications. Petri nets, however, are a provable theoretic standard that a middleware or framework can implement. A framework similar to Petri nets has been designed with high-throughput signal processing in mind, as an implementation of the Kahn Process Network model [1]. While this framework is provably deterministic, it attempts to detect and resolve deadlocks at run time, needs to know that certain operations to be performed on the signal are commutative in order to function, and is thus less general than Petri net frameworks.
Perennial is a system designed to machine-verify concurrent programs which can crash, but only for a specific subset of the Go language (Goose) [4]. In Perennial, the system designer provides source code, a specification, and a proof that the source code meets that specification, and a computer program verifies the proof. This is a powerful system because it directly verifies concurrent source code that is allowed to crash, but it is currently language-specific and has to implement versioned memory.
A known practical issue with Petri nets is state explosion, the exponential growth of the state space of a Petri net with respect to the number of nodes and tokens in the net. However, work has been done to counteract this by analyzing sections of Petri nets to independently solve for local state graphs, which are combined together later [5]. Another known shortcoming of Petri nets is that they do not model the duration of state transitions. Previous work has slightly generalized Petri nets via timed Petri nets [15]. In a timed Petri net, transition firings are atomic, but tokens take time to become available once in their new places. This is plausibly useful when implementing Petri-net-based frameworks, to capture the actual time taken to process tokens being computed in firing transitions.
There have been similar graph-based computational languages, the most notable being the Data Flow Procedure Language [7]. In this language, Dennis outlines a scheme in which every fundamental arithmetic operation and conditional branch is represented by nodes in a graph connected by directed edges, all of which are parallelizable. While this language is complete and can yield more compact and more parallelizable descriptions of a computation, it has four major drawbacks: synchronization, complexity, performance, and integration.
Regarding synchronization, static dataflows as Dennis defined them have an implicit limit of one token per arc, with all tokens the same size. Culler expands on this by considering each token a pointer to a larger value in storage [6]. However, this raises the issue of needing either to define synchronization mechanisms in a dataflow to ensure the data in storage is not written or read in the wrong order, or to make all values pointed to by tokens copy-on-write. Culler also points out that there is no way in hardware to guarantee that there will always be one token per arc with the proposed hardware or software architecture, and proposes adding an acknowledgement arc to each dataflow arc, which would increase the number of tokens exchanged by 1.5 to 2 times in a given dataflow.
The language is highly complex: it has two types of fundamental tokens, signals and data, and many types of nodes to handle all possible fundamental interactions between these token types. This is in stark contrast to Petri nets and NT-Petri nets, which we propose to simply describe the system of computation to be executed in parallel. A high-level language named Id was compiled into dataflow graphs and executed on some novel CPUs [2].
Performance will suffer if serialization is never exploited. If such a language were executed in parallel on a multicore von Neumann computer, it would likely be much slower than the same computation evaluated by a completely serial program, due to the incurred overhead of scheduling every operation through the operating system. A non-traditional CPU architecture has been proposed [8] which would work specifically on dataflow graphs, but it requires two additional large blocks of circuitry, a distribution network and an arbitration network, to do the work of scheduling these parallelizable operations in hardware. There have also been modifications to von Neumann CPUs [10] to perform the execution and scheduling of dataflow graphs in hardware. However, CPU design is a complex field, and it remains to be seen whether these really have significant performance benefits over a multicore von Neumann machine running several serial programs that trade data and synchronize only occasionally.
A completely parallelizable language is also not very integrable with the millions of existing computer systems whose CPUs are optimized for a small number of rapidly executed, mostly serial programs. Moreover, there do not exist large, well-tested libraries written in such a language for a potential system designer to leverage in building the computer program of interest.
In [3], Petri nets are used to compare two different concurrency models: partial ordering and interleaved atomic instructions. In our NT-Petri net implementation, a combination is used: multiple work clusters run in separate threads of execution. The work clusters execute regions of NT-Petri nets in a partial order, while the work clusters themselves run in separate threads, which may have interleaved execution according to the operating system scheduler.
III. NONDETERMINISTIC TRANSITIONING PETRI NETS
Deterministic programs have been the focus of previous work. However, systems are now sufficiently complex that they are designed to be nondeterministic. Nondeterministic programs have also been enabled by advances in programming languages that make them fast and easy to develop.
For example, merely loading a webpage is now done nondeterministically, which greatly improves performance. However, when nondeterministic programs fail, they fail spectacularly and are much harder to fix and debug. We propose modeling nondeterministic programs with Petri nets. This novel idea allows the modeler to control and partition nondeterminism, as we explain below. Specifically, we will work with a more general version of colored Petri nets whose transition nodes take tokens from a subset of their incoming place nodes and produce nondeterministically colored tokens at a nondeterministic subset of their outgoing place nodes. We abbreviate these NT-Petri nets.
A. Analysis of NT-Petri nets
Given a transition node T in an NT-Petri net and a state matrix s holding the count of each colored token at each place node, T shall describe the conditions under which it is enabled with a Boolean-valued enable function E_T(s). E_T(s) shall depend only on the input place nodes of T. Let π_T(s) be the set of possible changes to s performed by T. The set of next possible states is {s + δ | δ ∈ π_T(s)}. In keeping with the definition of Petri nets, no place node in a state can have fewer than 0 tokens. The state graph of an NT-Petri net start state pair can be found by recursively computing subsequent states allowed by enabled transition nodes; see Algorithm 1 (a minimal sketch of this computation is given at the end of this section).
Proposition 1: If a Petri net's state graph has a cycle, the Petri net must have a cycle. However, if a Petri net has a cycle, the state graph may not have a cycle. The first claim holds because, if the net had no loops, tokens in the places could not be replenished, so previous states could not be reached again from subsequent states. Conversely, if a Petri net has a loop, a state graph generated from it may or may not have a loop, depending on the start state: there may exist a loop in the Petri net that tokens can never reach, in which case the corresponding state graph has no loop.
The bound on the size of the state graph of an NT-Petri net with up to t tokens, all unique, and p place nodes is |State Graph| ≤ (p + 1)^t. Each token has p + 1 choices (any place, plus the option not to be placed), and since there are t tokens, we get the (p + 1)^t bound on the number of states. The number of states may be smaller depending on the connectivity and transition rules of the NT-Petri net as a graph.
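Here is a minimal sketch of the state-graph computation referred to as Algorithm 1. This is our reconstruction of the idea, not the paper's actual listing: a breadth-first search over markings, in which each NT transition contributes one candidate successor per firing alternative in π_T(s).

use std::collections::{HashSet, VecDeque};

type State = Vec<u32>; // token counts per place (one colour for brevity)
// Each transition is a set of firing alternatives; an NT transition has several.
// An alternative is (inputs, outputs) as (place index, weight) pairs.
type Alt = (Vec<(usize, u32)>, Vec<(usize, u32)>);

// Apply one alternative if it is enabled in state `s`.
fn apply(s: &State, alt: &Alt) -> Option<State> {
    let mut next = s.clone();
    for &(p, w) in &alt.0 {
        if next[p] < w { return None; }
        next[p] -= w;
    }
    for &(p, w) in &alt.1 { next[p] += w; }
    Some(next)
}

// Breadth-first exploration of the reachable state graph from `start`.
// Terminates only for bounded start-state pairs; a practical tool would cap the search.
fn state_graph(start: State, transitions: &[Vec<Alt>]) -> Vec<(State, State)> {
    let mut seen = HashSet::new();
    let mut queue = VecDeque::new();
    let mut edges = Vec::new();
    seen.insert(start.clone());
    queue.push_back(start);
    while let Some(s) = queue.pop_front() {
        for alts in transitions {
            for alt in alts {
                if let Some(next) = apply(&s, alt) {
                    edges.push((s.clone(), next.clone()));
                    if seen.insert(next.clone()) {
                        queue.push_back(next);
                    }
                }
            }
        }
    }
    edges
}

fn main() {
    // A bounded toy net: one transition moving a single token from place 0 to place 1.
    // (An unbounded net, as in Figure 3, would make this loop run forever.)
    let t: Vec<Alt> = vec![(vec![(0, 1)], vec![(1, 1)])];
    println!("{} edges", state_graph(vec![1, 0], &[t]).len()); // prints "1 edges"
}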
IV. CONCURRENT PROGRAMS
Concurrent programs are useful for applications running on hardware capable of parallel execution, applications that use many pieces of hardware, or applications that have very large memory footprints. Concurrent programs work by having many threads that can be interleaved and executed in any way by an operating system [13]. A thread is a sequence of instructions to be executed in order. A program determines its threads and each thread's sequence of instructions. A single-threaded program contains one thread, while a concurrent program contains multiple threads. Threads depend on resources. A resource is something that a thread uses to accomplish its objective, for example a hard drive, a webcam, or CPU time. If multiple threads do not share resources correctly, then an unrecoverable state can be reached. For example, if one thread frees a region of memory while another thread is using that region, the program will be killed by the operating system.
Petri nets are most useful in building and modeling computer programs not as languages in themselves, but as an API in a standard language. The Petri net API should have non-blocking subroutines passed into each transition node and executed according to the state of the Petri net, and arbitrary constraints may be placed on the state graph prior to execution to determine whether the state graph of the constructed Petri net program is valid, as defined by the use case of the application for which the Petri net program is built.
V. CONCURRENT EXECUTION OF PETRI NETS
Petri nets are useful for writing concurrent programs. We let each token represent a resource and transitions represent functions on those resources. If two transition nodes of the Petri net do not share any resource dependency and are both enabled, then they can be safely executed concurrently (in any order or at the same time). We partition the transition nodes, denoted N_t, such that no two partitions share incoming place nodes. Each partition then gets its own thread, called a work cluster. If this is not done, a race condition could occur: if two work clusters are blocked concurrently on intersecting sets of places, then once tokens are deposited in these place nodes and more than one transition node is enabled in more than one work cluster, a race between threads occurs and an invalid state that is not on the state graph could be reached.
Consider the invalid NT-Petri net example in Figure 7. If both WorkCluster0 and WorkCluster1 are blocked on P1, then the receiver of the token at P1 is nondeterministic. If the token goes to T0, then the Petri net can continue its execution to the next state; but if the receiver of the data is T1, then the Petri net will deadlock prematurely, as T1 cannot execute due to the absence of a token in P2, and T0 cannot execute as there is no token in P1. Since the transitions T0 and T1 have intersecting input places, they should be placed in the same work cluster, see Figure 8. Disjoint work clusters may be unioned and the execution of the NT-Petri net will still be valid. In practice, some Petri nets may execute faster if partitioned into a small number of work clusters, due to the thread-handling overhead present in all operating systems.
Maximal concurrency of an NT-Petri net program may be computed by partitioning the transition nodes into the largest number of valid work clusters. When finite, there exists a unique maximal work cluster configuration, because transition nodes cannot be shifted between work clusters while keeping the work clusters valid. This is a type of graph partitioning problem: we optimize over all partitions of the transition nodes, with an objective set to infinity at invalid partitions. For partitions P of N_t, we solve
arg min_{P : ∪_i P_i = N_t} [ −count(P) + invalid(P) · ∞ ].
There are many methods for solving a discrete optimization problem such as this, as well as many software packages and libraries [16]. This leads to the conclusion that an NT-Petri net with highly connected place nodes will require a more synchronous execution than an NT-Petri net whose transitions have fewer input intersections. (A sketch of computing the maximal partition directly is given below.)
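Because the maximal valid partition described above coincides with the connected components of transitions under the "shares an input place" relation, it can also be computed directly, without generic discrete optimization. The following sketch is our own illustration; the input sets in main follow the Figure 7 discussion (T0 reading P1, T1 reading P1 and P2), with a third independent transition added as an assumption.

use std::collections::HashSet;

// `inputs[t]` is the set of input-place indices of transition t.
// Transitions whose input sets intersect (transitively) must share a work cluster;
// the connected components under this relation give the unique maximal partition.
fn work_clusters(inputs: Vec<HashSet<usize>>) -> Vec<Vec<usize>> {
    let n = inputs.len();
    let mut cluster: Vec<usize> = (0..n).collect(); // cluster id per transition
    loop {
        let mut changed = false;
        for i in 0..n {
            for j in (i + 1)..n {
                if cluster[i] != cluster[j] && !inputs[i].is_disjoint(&inputs[j]) {
                    // Merge the two clusters by relabelling the larger id to the smaller.
                    let (a, b) = (cluster[i].min(cluster[j]), cluster[i].max(cluster[j]));
                    for c in cluster.iter_mut() {
                        if *c == b { *c = a; changed = true; }
                    }
                }
            }
        }
        if !changed { break; } // fixpoint: transitive merging complete
    }
    let mut ids = cluster.clone();
    ids.sort();
    ids.dedup();
    ids.into_iter()
        .map(|id| (0..n).filter(|&t| cluster[t] == id).collect())
        .collect()
}

fn main() {
    // Figure 7: T0 and T1 both read P1 (place 1), so they land in one cluster.
    let t0: HashSet<usize> = [1].into_iter().collect();
    let t1: HashSet<usize> = [1, 2].into_iter().collect();
    let t2: HashSet<usize> = [3].into_iter().collect(); // hypothetical independent transition
    println!("{:?}", work_clusters(vec![t0, t1, t2])); // [[0, 1], [2]]
}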
VI. NT-PETRI NET SOFTWARE FRAMEWORK
We developed software to implement the above ideas. The software framework is open source and available on GitHub (https://github.com/MarshallRawson/nt-petri-net). It aims to facilitate the construction and execution of software systems modeled as colored NT-Petri nets. The framework supports transition nodes, place nodes, and typed tokens. Each transition node is defined by a set of functions that map defined input firing conditions to possible output tokens. The above theory uses colors, but in practice we switch to classes or types in the programmatic sense. A firing condition is defined by requiring a certain number of tokens of a certain type contained at certain input places. A function in a transition node takes a certain number of tokens of given types and outputs certain tokens into output place nodes.
Tokens are implemented as unique pointers to dynamically typed objects located on the heap, so moving tokens is efficient and safe. Places are implemented as a set of FIFO queues, with one queue for each possible type of token the place can be given. Transitions are implemented as structs [9] with several registered member functions. Each registered member function is assigned one or more input token sets and one or more output token sets; input and output token sets are themselves implemented as structs. At most one of the registered member functions is called at a time, depending on the input tokens available. All parsing of sets of tokens is done in the library itself; it is a purposeful design decision not to expose the list of tokens to the developer, in order to minimize parsing errors and maximally leverage compile-time checks on the developer's code. The front-facing transition API is designed to maximize the compile-time guarantees. The resulting NT-Petri net can be executed by as few as one thread or by as many threads as there are valid work clusters.
Consider an example of a transition with two input places, X and Y, and two output places, Z and Q. This transition tries to generate a score from the input tokens. The transition can fire when there is only one token in either X or Y, resulting in a token deposited to either Z or Q (depending on internal mechanisms); or, if there are tokens in both X and Y, it calculates a score with a different function and places the result into Q. Since which score gets calculated is determined by the arrival times of the tokens in X and Y, the transition's change to the state graph is nondeterministic. This NT-Petri net is described by the source code in Figure 9 and visualized in Figure 10.
VII. SMART PAN TILT ZOOM CAMERA USE CASE
Consider a realistic pan-tilt-zoom camera and microphone device that allows remote participants to take part in group discussions by providing them with audio and video of only those speaking. This is an example of a processing pipeline with feedback: the feedback in this pipeline is the pan, tilt, and zoom requests that point the camera at the current speaker. Typically, a processing pipeline with feedback like this is very difficult to model and to show concurrency-safe. However, an NT-Petri net provides a concise model of the desired concurrent solution. We visualize, and can then easily reason about, the NT-Petri net; see Figure 11.
We will describe the transitions of this NT-Petri net. For each transition T1 through T5, when all input places have tokens, the transition is enabled; when firing, a token is removed from each input place and a token is put in each output place. The T0 transition has a special rule determining its firing conditions and the resulting set of tokens from each condition: the transition can fire if there is a token on P10 or on P1. In the former case, a token is taken from P10 and put on P9; in the latter case, a token is taken from P1 and put on P0.
In this solution, T0, T1, T2, and T4 run concurrently, but only advance at the pace of the slowest processing step. T3 has a looser relationship with the processing pipeline: up to five pan-tilt-zoom requests (tokens) may be made and not yet fulfilled before execution of the pipeline will possibly wait.

fn main() {
    let n = Net::make()
        .place_to_transition("X", "a", "score")
        .place_to_transition("Y", "b", "score")
        .transition_to_place("score", "c", "Z")
        .transition_to_place("score", "d", "Q")
        .add_transition("score", score::Score::maker());
    Reactor::make(n).run();
}

Fig. 9: Example NT-Petri net implementation using our software framework written in Rust.

While this program can still temporarily block, it also has a finite state graph and therefore can be proven to have correct behavior and no deadlocks. This shows us that the number of start tokens in certain places can describe the synchronization requirements between certain computations in a concurrent program.
VIII. CONCLUSION
Modern electronics and computer systems rely on nondeterministic concurrent programs to multitask, for example running a screen and an internet connection simultaneously. However, concurrent models typically allow a large number of possible system states, most of which are erroneous. These erroneous states are what cause the failures and crashes that frustrate technology users. Using our framework, we model programs and systems as Petri nets. The state graph of the system can then be computed and checked for invalid states before deploying the system. A generalization of Petri nets with nondeterministic transition nodes makes the programs more concise, readable, and able to match hardware interfaces, while still maintaining the ability to optimize the number of threads and solve for the state graph. An NT-Petri net program can be optimized by solving for the maximal number of useful threads in its execution, since the resource requirements of each computation are explicitly stated before execution by the structure of the Petri net. We describe, and give a coded example of, our open source software framework that implements these ideas. This type of nondeterministic concurrency modeling could be a useful tool for system designers everywhere.
IX. FUTURE WORK
The two areas with the largest potential for immediate benefit to systems built as NT-Petri nets are timing introspection and state graph provability. A system built as an NT-Petri net can record and plot the timing of each transition computation versus the time wasted in the middleware or operating system. The system designer can use this information to make informed design decisions, changing how the transitions are connected or rearranging the work clusters to increase throughput and decrease latency. A system built as an NT-Petri net has a state graph that can be computed and verified before execution of the program. However, with large projects, the state graph can become too large. To remedy this problem, the NT-Petri net can be decomposed into components; a component's state graphs can then be computed and linked together to greatly reduce the size of the overall state graph, as described in [5]. Once a system has both representative timing records and a state graph, timing analysis can be conducted without needing to run the program at all. This eliminates the typical resource and time bottleneck of running and testing systems in the real world.
This is extremely useful to system designers attempting to optimize their systems via node rearrangements, or to collect evidence for timing goals and requirements.
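To make the data-structure description in §VI concrete, here is an illustrative model, our own sketch and not the framework's actual internals, of a place as a set of per-type FIFO queues holding boxed, dynamically typed tokens:

use std::any::{Any, TypeId};
use std::collections::{HashMap, VecDeque};

// Illustrative model of a place: one FIFO queue per token type,
// with tokens as boxed, dynamically typed values (cheap to move).
#[derive(Default)]
struct Place {
    queues: HashMap<TypeId, VecDeque<Box<dyn Any>>>,
}

impl Place {
    fn push<T: Any>(&mut self, token: T) {
        self.queues
            .entry(TypeId::of::<T>())
            .or_default()
            .push_back(Box::new(token));
    }
    // A firing condition "n tokens of type T" is checkable without parsing tokens.
    fn count<T: Any>(&self) -> usize {
        self.queues.get(&TypeId::of::<T>()).map_or(0, |q| q.len())
    }
    fn pop<T: Any>(&mut self) -> Option<T> {
        self.queues
            .get_mut(&TypeId::of::<T>())?
            .pop_front()
            .and_then(|b| b.downcast::<T>().ok())
            .map(|b| *b)
    }
}

fn main() {
    let mut p = Place::default();
    p.push(3u32);
    p.push(String::from("hypothetical pan-tilt-zoom request"));
    assert_eq!(p.count::<u32>(), 1);
    assert_eq!(p.pop::<u32>(), Some(3));
}

A design of this kind keeps token moves pointer-sized and lets enablement checks run per type without exposing the token list to the developer, matching the stated goals of the framework's API.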
Diet and Host Genetics Drive the Bacterial and Fungal Intestinal Metatranscriptome of Gilthead Sea Bream
The gut microbiota is now recognised as a key target for improving aquaculture profit and sustainability, but we still lack insights into the activity of microbes at fish mucosal surfaces. In the present study, a metatranscriptomic approach was used to reveal the expression of gut microbial genes in the farmed gilthead sea bream. Archaeal and viral transcripts were a minority but, interestingly and contrary to rRNA amplicon-based studies, fungal transcripts were as abundant as bacterial ones, and increased in fish fed a plant-enriched diet. This dietary intervention also drove a differential metatranscriptome in fish selected for fast and slow growth. Such a differential response reinforced the results of previously inferred metabolic pathways, enlarging, at the same time, the catalogue of microbial functions in the intestine. Accordingly, vitamin and amino acid metabolism and rhythmic and symbiotic processes were mostly shaped by bacteria, whereas fungi more specifically configured host immune, digestive, or endocrine processes.
INTRODUCTION
The green transition of food production systems is the central challenge as production volumes increase. The target is thus to decrease the environmental footprint while complying with high food safety and quality standards (Béné et al., 2015; FAO, 2016). However, climate change will impact most food production systems, and aquaculture in particular, both directly, affecting the physical condition and physiology of stocked animals, and indirectly, altering fishmeal and fish oil costs, as well as other goods and services (Bohnes et al., 2019; Garrett et al., 2021). To overcome some of these constraints, continuous efforts have been made over the last two decades to successfully replace marine feed ingredients with more sustainable feedstuffs in fish feeds (Benedito-Palos et al., 2007; Lazzarotto et al., 2018; Perera et al., 2019; Egerton et al., 2020). The most obvious alternatives are plant proteins and oils and, interestingly, most of their drawback effects, including a pro-inflammatory condition, loss of epithelium integrity, and advanced male-female sex reversal in a protandrous hermaphrodite fish such as gilthead sea bream (Sparus aurata), can be reversed, at least partially, with the use of feed additives (Estensoro et al., 2016; Piazzon et al., 2017; Simó-Mirabet et al., 2018; Reis et al., 2021). In addition, a range of novel feed formulations, including emerging feed ingredients (i.e., insects, seaweeds, microalgae, microbial biomasses, land-animal processed animal proteins, and fish meal and oil from fisheries and aquaculture by-products), do not affect fish welfare and guarantee good zootechnical performance in gilthead sea bream (Basto et al., 2021; Naya-Català et al., 2021a; Solé-Jiménez et al., 2021). Therefore, it is possible to produce marine seafood products using formulation concepts and ingredients that fit into a circular economy framework.
Genetic improvements in feed conversion ratio (FCR) also contribute to moving toward a more environmentally sustainable aquaculture sector (Kause et al., 2016; Vandeputte et al., 2019), though FCR is a problematic trait to include in aquaculture breeding programmes due to the difficulty of accurately measuring individual feed intake in the aquatic milieu (De Verdal et al., 2018; Besson et al., 2019).
Thus, in gilthead sea bream, most genetic selection programmes have been applied for the direct selection and improvement of somatic growth, disease resistance, or carcass quality (Perera et al., 2019, 2021b; Lorenzo-Felipe et al., 2021). However, there is now evidence that selection for growth in a farming environment co-selects changes in reproductive or swimming performance (Ferosekhan et al., 2021; Perera et al., 2021a). In addition, we have recently reported that selection for growth also selects for a more functionally flexible microbiota when the inferred gut metagenomes of representative fish families with different growth trajectories across the production cycle were compared (Piazzon et al., 2020). However, such an approach was based on the amplification of specific variable regions (v3-v4) of the 16S rRNA gene, which only informs about the taxonomic profile of one portion (Bacteria, Archaea) of the whole gut microbial community, which also includes Fungi and Viruses (Merrifield and Rodiles, 2015). In addition, the inferred functional changes related to gut bacteria variations might not correlate with the actual expression profile of these populations. To solve these issues, metatranscriptomic analyses are perhaps a better approach (Aguiar-Pulido et al., 2016), as evidenced by the exponential increase of metatranscriptomic projects over the last 20 years (Shakya et al., 2019). Such an approach has been used in humans to characterise active microbes in a community, discover novel microbial interactions, and track the relationship between viral genes and their hosts (Gosalbes et al., 2011; Bikel et al., 2015; Bashiardes et al., 2016; Moniruzzaman et al., 2017). In livestock species, metatranscriptomics has helped to reveal the association between breed effect and rumen microbiome activity (Li et al., 2019a). In aquatic organisms, although to a lesser extent, there are also some examples: analyses of the composition of marine fish viromes (Geoghegan et al., 2021), characterisation of the full set of water-living microbes (Salazar et al., 2019; Trench-Fiol and Fink, 2020), and the discovery of microbial functions associated with the digestion of algal polysaccharides in the digestive tract of the abalone Haliotis discus hannai (Nam et al., 2018). Yet, to date, there is no information on the genome × gut metatranscriptome interaction of genetically selected fish and how this can affect their nutritional plasticity. This is, thereby, the aim of the present study, where we sequenced the intestinal metatranscriptome of two gilthead sea bream families with opposite growth trajectories from the study of Piazzon et al. (2020) with three main objectives: i) to study and characterise the full set of microbes present in the gut of this fish species, ii) to evaluate if genetic background and alternative diets can change the expression of genes collectively expressed by these microbial communities, and iii) to unravel which metabolic processes are enriched in the gilthead sea bream intestine and which microbial community is involved, to provide more insights into the broad microbial landscape of the gut of farmed fish.

FIGURE 1 | Workflow of the gilthead sea bream metatranscriptome assembly conducted in this study. Black boxes with white text indicate generated genomic resources, according to the following steps: experimental procedures and sequencing, metatranscriptome assembly, and post-assembly analyses (taxonomic assignation, functional annotation, and GO and KEGG over-representation tests).
Samples

Two gilthead sea bream families from the PROGENSA® breeding programme (Perera et al., 2019, 2021b) were selected for metatranscriptomic analysis according to their different growth trajectories in a highly controlled flow-through system (Perera et al., 2019): fast-growth (family e6e2) and slow-growth (family c4c3). These animals, fed a control (D1) or a well-balanced plant-based diet (D2) for 9 months, were kept in eight 3,000 L tanks under a common garden system to eliminate confounding environmental effects. Total RNA from the intestinal mucus of the anterior portion of the gut was extracted from eight fish of each family and diet. Then, a total of 16 pooled samples (each pooling two fish of the same diet, family, and tank) were sequenced, i.e., four samples per experimental group. More details on the fish rearing and sampling can be found in Figure 1 and the Materials and Methods section.

Sequencing and Metatranscriptome Assembly

Ribo-depletion and subsequent Illumina paired-end (PE) sequencing of the 16 pooled RNA samples yielded a total of 766 M reads (∼48 M reads per sample) (Supplementary Table 1). After trimming, quality filtering, and a second in silico ribosomal RNA removal step, around 3% of all reads were discarded, and the remaining reads ranged between 25 M (7.5 Gb) and 166 M (49.8 Gb) across the experimental groups. Pre-processed reads were then assembled, and 358,784 unigenes (i.e., non-redundant transcripts) were identified (Table 1). Mapping of the cleaned reads (∼75% mapped) showed that every unigene was covered by at least one sequence. Alignment of the unigenes with bacterial, fungal, archaeal, and viral sequences extracted from the NCBI NR database, performed to obtain the repertoire of genes expressed by microbial communities in the gut of gilthead sea bream, resulted in a total of 35,144 transcripts, which corresponded to ∼10% of the total assembled RNA transcripts (Table 1). These transcripts corresponded to 17,618 unique descriptions with a low proportion of hypothetical (1,813; 5.2%) and uncharacterised/unnamed (267; 0.8%) proteins.

Taxonomic Composition of Gilthead Sea Bream Metatranscriptome

All 35,144 annotated unigenes of the gilthead sea bream metatranscriptome were classified to at least one of the four targeted taxonomic kingdoms: Bacteria, Archaea, Eukarya (for Fungi-related unigenes), and Virus. Considering this assignation and the normalised gene expression level of the annotated unigenes, the relative expression of each taxonomy in all the samples was calculated. The results of this procedure showed that Fungi and Bacteria were the most active populations in our samples, representing 51.43 and 43.67% of the total gut microbial expression in our species, respectively. To a lesser extent, genes belonging to the Virus (3.25%) and Archaea (1.65%) populations were also expressed (Figure 2A). The functional annotation of the taxonomic groups resulted in a diverse set of functional biological process categories (GO-BP) allocated to 23,706 annotated unigenes (11,284 unique descriptions).

FIGURE 2 | The size of the dots represents the percentage of genes in each category retrieved with our assembly strategy in each group. The colour scale represents the −log10 Padj value obtained in the over-representation test of each pathway within each taxonomy.
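The relative-expression calculation described above reduces to a per-kingdom aggregation of normalised expression values. The following Python sketch illustrates the idea; the file name and column layout are hypothetical placeholders, not from the study, which used its own pipeline:

```python
import pandas as pd

# Hypothetical input: one row per annotated unigene with its kingdom-level
# assignment and per-sample normalised expression (FPKM) values.
expr = pd.read_csv("unigene_expression.csv")   # columns: unigene, kingdom, S01..S16
samples = [c for c in expr.columns if c.startswith("S")]

# Sum expression per kingdom within each sample, then express it as a
# percentage of the total microbial expression (cf. Figure 2A).
totals = expr.groupby("kingdom")[samples].sum()
relative = 100 * totals / totals.sum(axis=0)

print(relative.mean(axis=1).round(2))  # mean relative expression (%) per kingdom
```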
Then, an over-representation analysis inferred the terms associated with the intestinal microbial taxonomies, and 437 GO-BP terms were considered significantly over-represented (Fisher test; FDR < 0.05) among the different groups. To have an overview of the functionality of each microbial population in the gut, these over-represented terms were clustered into 20 level-2 GO-BP categories present in at least one of the groups (Figure 2B). The genes expressed by the predominant bacterial and fungal populations disclosed involvement in practically all the processes. However, the rhythmic process was specific to bacteria, whereas the immune system process was exclusively associated with the bacterial and fungal communities. In addition, all taxonomic groups had an over-representation of routes related to interspecies interaction between organisms, multi-organism process, response to stimulus, and signalling terms, among others.

Diet and Family Effects on Metatranscriptome Composition

To test if the genetic background or the diet influenced the expression of transcripts of the different microbial communities, changes in their relative expression abundances were assessed. On its own, the family variable showed no statistically significant effect on metatranscriptome composition, and neither did the interaction between the family and diet variables. However, statistical differences (two-way ANOVA, P < 0.05) were detected when the diet variable was studied independently (Table 2; values are the mean ± SEM of four pooled samples, eight fish in total, per group; P-values are from two-way analysis of variance, with significant values, P < 0.05, marked in bold with an asterisk). Specifically, a trade-off between the relative abundance of bacterial transcripts (decreasing from 45.18% in fish fed D1 to 42.16% in fish fed D2) and fungal transcripts (increasing from 50.1% in fish fed D1 to 52.76% in fish fed D2) was found. To further evaluate these differences in microbial expression among the groups, a partial least squares discriminant analysis (PLS-DA) comprising the 35,144 annotated unigenes was performed. The discriminant model was based on five components, which explained 99% [R2Y(cum)] and predicted 75% [Q2Y(cum)] of the total variance (Figure 3A). During the statistical processing to construct the model, one sample from the c4c3-D1 group, which coincided with the sample with the lowest number of sequenced reads, appeared as an outlier (Hotelling's T2 > 0.99) and was excluded from the model. The fit of the resulting PLS-DA model was validated by a test of 500 random permutations (Supplementary Figure 2A). The final model separated the c4c3 family from the e6e2 fish in the first component (∼41% explained variance), whereas the second component mainly separated the e6e2-D1 fish from the other two groups (∼41% explained variance). These results showed how D2 significantly changed the metatranscriptomic profile in the fast-growth family, whereas no differences were detected in the slow-growth family when alternative diets were used. Similar results were found when separate PLS-DA models were inferred using the annotated unigenes exclusively assigned to each taxonomic group (Supplementary Figures 2B-I). Likewise, a subsequent hierarchical clustering, using the FPKM expression values of the 5,998 genes driving the separation among groups (VIP ≥ 1), was not able to separate the samples from the c4c3 family fed both diets, whereas the fish from the e6e2-D1 and e6e2-D2 groups were assigned to different clusters (Figure 3B).
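For readers who want to reproduce this kind of analysis outside EZinfo, the sketch below shows a minimal PLS-DA with VIP scores in Python. The VIP formula is the standard one (Wold et al., 2001); the data, group coding, and dimensions are illustrative stand-ins, not the study's:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Standard VIP per gene; genes with VIP >= 1 are taken as discriminant."""
    W, T, Q = pls.x_weights_, pls.x_scores_, pls.y_loadings_
    p = W.shape[0]
    ss = np.sum(T ** 2, axis=0) * np.sum(Q ** 2, axis=0)   # variance per component
    w_norm = (W / np.linalg.norm(W, axis=0)) ** 2          # normalised squared weights
    return np.sqrt(p * (w_norm @ ss) / ss.sum())

# Illustrative stand-ins for the real data: 15 retained samples, FPKM-like
# expression for 500 genes, and one-hot coding of 4 experimental groups.
rng = np.random.default_rng(1)
X = rng.lognormal(size=(15, 500))
Y = np.eye(4)[rng.integers(0, 4, size=15)]

pls = PLSRegression(n_components=5, scale=True).fit(X, Y)
vip = vip_scores(pls)
print((vip >= 1.0).sum(), "genes with VIP >= 1")
```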
Cluster analysis using the 5,998 differentially expressed genes identified four gene clusters according to the expression levels in the different groups (optimal elbow number = 4; Supplementary Figure 3): C1, 1,301 genes up-regulated in e6e2-D1 with lower expression values in e6e2-D2 and c4c3; C2, 1,007 genes down-regulated in e6e2 fish in comparison to c4c3 fish; C3, 1,502 genes down-regulated in e6e2-D1 with higher expression values in e6e2-D2 and c4c3; and C4, 2,188 genes up-regulated by the alternative diet (D2) in the e6e2 family. Genes allocated to each group were used for further over-representation analysis. The functional annotation of the 5,998 genes overcoming the VIP threshold can be accessed in Supplementary Table 2.

Functional Gene Ontology and Kyoto Encyclopaedia of Genes and Genomes Pathways Over-Representation Tests

To study the functionality of the genes involved in each clustered group, an over-representation test (Fisher test, FDR < 0.05) was performed. This procedure displayed 340 GO-BP and 236 Kyoto Encyclopaedia of Genes and Genomes (KEGG) unique terms that were over-represented in at least one of the groups (Supplementary Figure 4A). Also, Venn diagrams showed a high degree of overlap of terms between categories (Supplementary Figures 4B,C). To filter the list of over-represented terms and avoid intersections, only the terms unique to each group were retrieved and used for further analysis, with a total of 99 and 90 category-specific GO-BP and KEGG terms, respectively. The highest number of specific enriched GO-BP terms was found in C4 (54), followed by C3 (29), C1 (12), and C2 (4). In the case of enriched KEGG terms, 33 routes were found in C1, followed by C4 (25), C2 (18), and C3 (14). The entire list of over-represented terms in each group, and their respective lists of associated genes, can be accessed in Supplementary Tables 3, 4. A total of 88 (89%) category-specific enriched GO-BP terms were found to be grouped according to their allocated shared genes. Thus, the over-represented terms associated with the C1 and C3 groups were clustered in 6 supra-categories (Figure 4A). C1 and C3 were explored together as they present the same trend: similar expression in e6e2-D2 and c4c3, with significant differences only in e6e2-D1. The highest number of genes was present in the supra-category Regulation of cellular component biogenesis (20 microbial genes allocated to 10 over-represented GO-BP terms), followed by Negative regulation of cell communication and signalling (6 genes to 4 GO-BP), Immune response and angiogenesis (5 genes to 9 GO-BP), Lipid storage (5 genes to 2 GO-BP), Cell cycle phase (2 genes to 5 GO-BP), and Multi-organism reproductive behaviour (2 genes to 3 GO-BP). Only two GO-BP terms were connected in C2, under a supra-category named Organic hydroxyl compound metabolic process, which encompassed five microbial genes (Figure 4B). In the case of C4, a total of 10 supra-categories were found. Interestingly, two of them were closely related to symbiotic processes: Modulation by symbiont of host cellular process (28 genes to 6 GO-BP), and Host-mediated regulation of intestinal microbiota composition (7 genes to 1 GO-BP) (Figure 4C).
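As a rough illustration of the over-representation logic (a 2 × 2 Fisher exact test per term followed by FDR correction), here is a minimal Python sketch. Note that the study used the goseq R package, which additionally corrects for gene-length bias; the term and gene identifiers below are toy placeholders:

```python
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def over_representation(term2genes, cluster, universe, alpha=0.05):
    """One-sided Fisher exact test per term with Benjamini-Hochberg FDR."""
    terms, pvals = [], []
    for term, genes in term2genes.items():
        a = len(genes & cluster)             # term genes inside the cluster
        b = len(cluster) - a                 # cluster genes without the term
        c = len(genes - cluster)             # term genes outside the cluster
        d = len(universe - genes - cluster)  # everything else
        terms.append(term)
        pvals.append(fisher_exact([[a, b], [c, d]], alternative="greater")[1])
    rejected, padj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return [(t, q) for t, q, r in zip(terms, padj, rejected) if r]

# Toy placeholders for GO terms, cluster genes, and the gene universe.
universe = {f"g{i}" for i in range(1000)}
cluster = {f"g{i}" for i in range(40)}
term2genes = {"GO:lipid_storage": {f"g{i}" for i in range(25)},
              "GO:cell_cycle_phase": {f"g{i}" for i in range(500, 540)}}
print(over_representation(term2genes, cluster, universe))
```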
The rest of the C4 supra-categories were named Cellular response to external stimulus (35 genes to 11 GO-BP), Regulation of cell division and metabolic process (34 genes to 12 GO-BP), Regulation of anatomical structure morphogenesis (24 genes to 4 GO-BP), Killing of cells of another organism (17 genes to 7 GO-BP), Cilium or flagellum-dependent cell motility (16 genes to 2 GO-BP), Sporulation (9 genes to 2 GO-BP), Regulation of appetite (5 genes to 4 GO-BP), and Regulation of catalytic activity (5 genes to 2 GO-BP). The list of category-specific enriched KEGG terms was also clustered, and nine supra-categories, containing 47 pathways (∼51%), were found. The groups C1 and C3 exclusively comprised the supra-categories Fatty acid metabolism (46 microbial genes allocated to 10 enriched KEGG terms), Energy metabolism in prokaryotes and carbohydrate metabolism (35 genes to 6 KEGG), Proteasome (34 genes to 1 KEGG), and Xenobiotics degradation (5 genes to 3 KEGG) (Figure 5A). Alkaloid biosynthesis (9 genes to 2 KEGG) and Vitamin biosynthesis and metabolism (4 genes to 2 KEGG) were limited to C2, whereas the Organismal system supra-category (26 genes to 7 KEGG) was restricted to C4 (Figures 5B,C). Terms related to Infectious diseases and immune system signalling pathways (33 genes to 11 KEGG) were shared between the C1-C3 and C2 groups, and the Amino acid metabolism supra-category (22 genes to 5 KEGG) was present in all groups.

Taxonomic Composition of Enriched Supra-Categories

The lists of category-specific, clustered, and enriched GO-BP (88) and KEGG (47) terms were allocated to 167 and 200 microbial transcripts, respectively. Among the GO-BP terms, a total of 73 genes (43.7%) were assigned to Bacteria, followed by Fungi (69; 41.3%), Viruses (24; 14.4%), and Archaea (1; 0.6%). Fungal genes were predominant (>60%) in the supra-categories of Organic hydroxyl compound metabolic process, Regulation of catalytic activity, Regulation of appetite, and Immune response and angiogenesis (Figure 6A), whereas bacterial genes were more evident in the supra-categories of Modulation by symbiont of host cellular process, Sporulation, Host-mediated regulation of intestinal microbiota composition, Killing of cells of another organism, and Lipid storage. On the other hand, a considerable amount of the genes allocated to enriched and clustered KEGG categories (140; 70%) were assigned to Fungi, followed by Bacteria (52; 26%), Viruses (7; 3.5%), and Archaea (1; 0.5%). The supra-category Proteasome and the categories associated with organismal systems (>60% of genes) were predominantly composed of fungal genes, as were Alkaloid biosynthesis and Infectious diseases and immune system signalling pathways (Figure 6B).

FIGURE 4 | Network layout representing the associations between over-represented GO-BP terms according to their shared allocated genes in panels (A) C1 + C3, (B) C2, and (C) C4. Node size represents the number of genes allocated to a specific GO-BP term. Node colours show the representative name of clustered GO-BP terms. Edge width represents the number of shared genes between two GO-BP terms.

FIGURE 5 | Network layout representing the associations between over-represented KEGG terms according to their shared allocated genes in panels (A) C1 + C3, (B) C2, and (C) C4. Node size represents the number of genes allocated to a specific KEGG term. Node colours show the representative name of clustered KEGG terms. Edge width represents the number of shared genes between two KEGG terms.
By contrast, the Amino acid and Vitamin biosynthesis and metabolism supra-categories seemed to be mainly driven by bacterial genes.

DISCUSSION

The gut microbiomes of fish are complex networks of communities, including members of Bacteria, Archaea, Fungi, and Viruses (Merrifield and Rodiles, 2015). However, it is often estimated that the abundance of bacterial taxa (>99%) far outnumbers that of the other microbial populations (<1%) (Merrifield and Rodiles, 2015; Egerton et al., 2018). In this scenario, amplicon-based sequencing techniques targeting the bacterial 16S rRNA gene have been helpful approaches, widely used to measure the composition and alterations of fish gut bacterial communities. However, these techniques do not offer the complete intestinal metagenome landscape, and the microbial gene repertoire and its expression cannot be retrieved or measured (Lindahl et al., 2013; Rausch et al., 2019). To overcome these limitations, this study reports a metatranscriptomic approach, showing that genes belonging to the Archaea, Bacteria, Eukarya (Fungi), and Virus domains are metabolically active in the gilthead sea bream (Figure 2A). As expected, archaeal and viral transcripts were found in lower proportions (1.65 and 3.25%) in the intestinal mucosa, but bacterial and fungal transcripts were comparably abundant (∼44 and ∼51%, respectively) and highly predominant in this mucosal surface. Indeed, several studies have highlighted the importance of the highly diverse fungal microbiota fraction in humans (Huffnagle and Noverr, 2013; Santus et al., 2021). Along the same lines, more than 20% of the metatranscripts in the rumen of dairy cows have been related to fungi (Comtet-Marre et al., 2017), and untargeted metabolomics underlined the presence of fungal-derived metabolites in the serum of gilthead sea bream fed plant-based diets (Gil-Solsona et al., 2019). Certainly, eukaryotic fungal cells, although lower in number, might be more transcriptionally active than prokaryotic cells. As shown in Figure 2B, the functional annotation of the gilthead sea bream metatranscriptome shared, across taxa, the over-representation of a significant number of GO-BP categories related to symbiosis (Interspecies interaction between organisms, Multi-organism process, Multicellular organismal process) and sensory responses (Behaviour, Response to stimulus, and Signalling), evidencing the contribution of all the microbial communities to the cooperative processes taking place in the host gut. The predominant taxa, Fungi and Bacteria, expressed genes related to all the level-2 GO-BP categories, but the Rhythmic process appeared as a bacterial-specific category. In this line, the manipulation of the daily rhythms of gut bacterial microbiota abundance and activity is becoming a promising chrononutritional approach to consolidate host circadian rhythms and metabolic homeorhesis (Parkar et al., 2019; Gutierrez Lopez et al., 2021). Recently, Calduch-Giner et al. (2022) reviewed behavioural biosensing approaches based on accelerometer technology [AEFishBIT dataloggers (Rosell-Moll et al., 2021)] for informing on fish social behaviour in terms of coping styles or changes in daily or seasonal activity, linking ventilation rates with changes in energy partitioning between growth and physical activity.
However, the association between changes in behaviour and gut microbiome rhythms remains almost unexplored in fish, and their inter-related study would contribute to discerning the disrupting effects of life stressors on the gut processes related to host rhythmicity. In any case, it is noticeable that the plant-enriched diet yielded a gilthead sea bream metatranscriptome with a significant decrease of bacterial transcripts (∼7%), together with an increase (4%) in the number of fungal transcripts (Table 2), as described before in rainbow trout fed yeast and soybean meal diets (Merrifield et al., 2009; Huyben et al., 2018). In most cases, taxonomic assignment at lower taxonomic levels using the transcript sequences was not possible. Nonetheless, although commonly reported taxonomic assignments in amplicon-based protocols, such as families or genera, could not be related to their corresponding gene repertoire, the information shown in Supplementary Figure 1 can provide a hint of the most metabolically active phyla. Archaeal transcripts were mostly assigned to the Euryarchaeota phylum (Supplementary Figure 1A), one of the most discussed in humans for positively impacting gut health (Horz and Conrads, 2010). In line with their highest abundance in bacterial gut microbiome studies using 16S rRNA (Piazzon et al., 2017; Naya-Català et al., 2021b; Solé-Jiménez et al., 2021), the phyla Proteobacteria, Firmicutes, Actinobacteria, and Bacteroidetes were also the ones contributing the most expressed bacterial transcripts when taxonomic assignment was possible (Supplementary Figure 1B). Ascomycota and Basidiomycota, which play a pivotal role in the expression of enzymes related to fish nutrition and intestinal maturation (Gatesoupe, 2007; Banerjee and Ghosh, 2014; Siriyappagouder et al., 2018), were the most abundant phyla among the fungal fraction (Supplementary Figure 1C). Lastly, within the viral fraction, the Herpesviridae, Poxviridae, and Retroviridae families appeared to be the transcriptionally prevailing and functional ones (Supplementary Figure 1D), as previously detected in wild and farmed gilthead sea bream (Filipa-Silva et al., 2020). From a recent study (Piazzon et al., 2020), we concluded that gilthead sea bream families selected for fast growth harboured a plastic bacterial microbiota that was able to adapt to diet changes with no impact on growth or health. Indeed, small changes in bacterial composition accounted for larger changes in metabolic capacity when the inferred metagenome and pathway analysis were conducted (59 metabolic pathways changing). Conversely, significant changes in intestinal bacterial composition were limited to changes in 15 metabolic pathways in fish families selected for slow growth, under the assumption made by such inference tools that all detected bacteria are metabolically active and express all their genes at a fixed level (Langille et al., 2013; Aßhauer et al., 2015; Louca et al., 2018). In line with these results, discriminant analysis of the metatranscriptome only showed a clear discriminant value with dietary changes for the fast-growth family (Figure 3A and Supplementary Figure 2). To measure the consistency of these results, the 5,998 discriminant genes were clustered in four groups (C1, C2, C3, and C4) (Figure 3B and Supplementary Figure 4), according to their expression pattern (discussed below). Clusters C1, C3, and C4 presented a differing expression pattern between the e6e2-D1 and e6e2-D2 groups.
Over-represented KEGG terms in these clusters (Supplementary Table 4) disclosed that ∼70% (42 out of 59) of the significant differentially expressed pathways predicted by the inferred metagenome of our previous study (Piazzon et al., 2020) were also detected in the current study; in addition, a total of 218 unique enriched pathways were found. Altogether, these results support that metagenome prediction tools can help to give an overview of the direction and magnitude of metabolic changes, but metatranscriptomic analyses provide more complete and precise information. The groups C1 and C3 encompassed 2,803 microbial genes whose expression pattern was influenced by the genetic selection for high growth and shifted toward values closer to the slow-growth family with the plant-based diet (Figure 3B). Among the elements governing this difference in the gilthead sea bream gut, we mainly found genes related to the principal metabolic routes required in all microbial populations (Qin et al., 2010) (Figures 4A, 5A). Bacterial, fungal, and viral genes were associated with Fatty acid metabolism, the predominant supra-category in these groups. This is not surprising, as gut microbes can process dietary lipid components and perform processes not exerted by the host (Schoeler and Caesar, 2019). Within this supra-category, the predominant KEGG term was N-Glycan biosynthesis, exclusively exerted, according to our results, by fungal genes. Besides, genetic selection for growth was accompanied by a rise in the formation of these compounds, which are indigestible for the host and can be transformed into short-chain fatty acids by microbial fermentation in the gut lumen (Koropatkin et al., 2012). This pathway is enriched in C1, which suggests that the plant-based diet was down-regulating it as part of a healthy and complex gut homeostatic process, although it is well known that dietary butyrate supplementation can mitigate most of the inflammatory drawback effects of plant-based diets in gilthead sea bream (Estensoro et al., 2016; Piazzon et al., 2017). Together with fatty acid metabolism, a wide representation of carbohydrate metabolism was found related to Bacteria and Fungi. Inside this supra-category, the pathway Glutathione metabolism stood remarkably above the others. Glutathione is an anti-oxidative compound, widely distributed in the gastrointestinal tract of humans and rodents (Mardinoglu et al., 2015), acting as a growth- and gut-health-promoting agent in trout (Wang et al., 2021). Here, genetic selection for fast growth induced an overexpression of microbial genes related to glutathione metabolism. However, this expression reverted to the values found in the slow-growing fish with the use of the plant-based diet. The bacterial fraction of the intestinal microbiota was predominantly expressing genes related to the Amino acid metabolism process in the C1 group, but also in C2 and C4. The difference among groups resided in the type of amino acid that this community makes available to the fish; the control of the dietary protein source is thus a strategic approach for managing amino acid-fermenting bacterial species and their metabolic pathways, which in turn could have an impact on the metabolism of the host gut (Neis et al., 2015). Lastly, in C3 and C2, we also found a strong association between infectious diseases and immune-inflammatory pathways, mediated by fungal genes.
This is a common link in gut microbiomes (Lokesh et al., 2012; Al Bander et al., 2020), and yeasts have a protective role against fish pathogens by expressing immunostimulatory substances (Li and Gatlin, 2006; Lokesh et al., 2012). According to our results, the formation of these compounds would be regulated by both the diet and the genetic background. Group C2, comprising 1,007 genes, differentiated the genetic background of the samples with no diet effect, demonstrating the effect of selective breeding on active gut microbial populations (Figures 4B, 5B). In addition to the amino acid metabolism and the infectious disease and immune system supra-categories described above, we found the fungal-associated Alkaloid biosynthesis supra-category. Traditionally, these bioactive compounds have been related to plants (Peng et al., 2019), but fungi, especially the Ascomycota phylum, are also able to produce them (Xu et al., 2020). The properties of these organic molecules include anti-microbial and antioxidant activity (Iranshahy et al., 2014), and their inclusion in diets has been suggested as a possible alternative to antimicrobial growth enhancers (Willems et al., 2020). In fish, these chemicals have been reported to produce anti-nutritional effects due to palatability issues when included in the diet of rainbow trout (Glencross et al., 2006). However, the positive properties of alkaloids cannot be underestimated, and the gut fungal population could be an interesting target when exploring the microbial production of alkaloids to reduce the use of antibiotics in aquaculture production (Okocha et al., 2018). Finally, a total of 2,188 genes were assigned to C4, a group showing the genetic × diet interaction effect in the fast-growth family. The expression of the genes in this group remained attenuated in the fast-growing fish fed the control diet and was up-regulated in fast-growth fish fed the plant-based diet, disclosing the dual response of this fish group when dealing with alternative diets (Figures 4C, 5C). Alternative diet formulations in gilthead sea bream are prone to produce changes in the intestinal plasticity of this species (Perera et al., 2019; Naya-Català et al., 2021b; Solé-Jiménez et al., 2021). Indeed, the families selected for growth used in this study showed an increased intestinal length when fed plant-enriched diets (Perera et al., 2019). Herein, the over-representation test disclosed the link between two supra-categories related to this anatomical feature: Host-mediated regulation of intestinal microbiota composition and Regulation of anatomical structure morphogenesis. These categories, mainly composed of bacterial transcripts, highlight an important role of this population in the intestinal reshaping that occurs upon feeding plant-enriched diets to fish families selected for fast growth. The bacterial population was also expressing genes related to the coupled categories Modulation by symbiont of host cellular process and Killing of cells of another organism. A recent study stated that diet and gut microbes could jointly act as enhancers of programmed cell death to reduce colorectal cancer (Chapkin et al., 2020), and, in humans, the bacterial community is a rich source of metabolites against pathogenic fungi, via the activation of the mTOR signalling pathway (Li et al., 2019b). Herein, this association suggests a role of bacterial intestinal symbionts in the modulation of processes resulting in the death or programmed death of host or other symbiont cells.
Fungal genes in C4 were involved in the regulation of appetite and several processes related to organismal systems interactions. However, the main expressed genes inside this supra-category belonged to the kinase and mitogen-activated protein kinase families, widely analysed in fungi (Martínez-Soto and Ruiz-Herrera, 2017), which networked together with functions of tissue development and regeneration (Dorso-ventral formation) and digestive (Bile secretion), endocrine (Aldosterone and Prolactin signalling pathways), and immune (Chemokine signalling pathway) functions. It is widely documented that gut microbial metabolites can play a pivotal role in the regulation of these functions (Tan et al., 2018; Silva et al., 2020). However, the current results only suggest that the introduction of plant-enriched diets to fast-growth families changes the signal transduction processes in the fungal symbionts, and only further metabolomics studies could help to discern the resulting metabolites of these signalling cascades. To sum up, this metatranscriptomic approach was very useful for measuring which microbial populations are metabolically active in the anterior intestine of gilthead sea bream and revealed a wide range of processes carried out by microbes that can serve as a gene catalogue for future studies. Moreover, all the transcripts were taxonomically assigned at the kingdom level, so processes exerted predominantly by a specific gut community could be disclosed at this level. In this line, 18S rRNA amplification approaches measuring the composition and variations of fungal intestinal communities arise as promising tools to completely understand the processes occurring in the anterior intestine of gilthead sea bream. Furthermore, despite the simplicity of the experimental model, where only two families selected for growth were used, this study helped us to corroborate the higher functional plasticity of the microbiome of fish selected for fast growth, which was able to shape a changing metatranscriptome with a more stable metagenome.

Ethics Statement

All procedures were approved by the Ethics and Animal Welfare Committee of IATS and CSIC. They were carried out in a registered installation facility (code ES120330001055) in accordance with the principles published in the European Animal Directive (2010/63/EU) and Spanish laws (Royal Decree RD53/2013) for the protection of animals used in scientific experiments.

Experimental Setup and Sampling

The growth-selected gilthead sea bream families used in this study were obtained from the Spanish selection programme for gilthead sea bream (PROGENSA®) and reared as previously described (Perera et al., 2019). Briefly, fish from families e6e2 (fast-growth) and c4c3 (slow-growth) were randomly distributed (common garden system) in eight 3,000 L tanks under a flow-through system and natural photoperiod and temperature at the IATS facilities (Castellón, Spain; 40°5′N, 0°10′E). Fish were individually tagged in the dorsal muscle with passive integrated transponders (PIT) and mixed in equal proportions and with a similar number of family members in each tank. For 9 months, four tanks were fed a control diet (D1) and the other four a well-balanced plant-based diet (D2). The exact composition of the diets and details on fish rearing can be found elsewhere (Perera et al., 2019; Supplementary Table 5).
At the end of the feeding trial (July 2018), a total of 32 48-h-fasted male fish (four fish per tank, two per family), with mean body weights of ∼138 g (D1) and ∼130 g (D2), were anaesthetised with 0.1 g/L of tricaine methanesulfonate (MS-222, Sigma-Aldrich, St. Louis, MO, United States) and sacrificed by cervical section. The anterior intestine was then cut out, opened, and washed with phosphate-buffered saline (PBS) to remove non-adherent materials and microbes. The tissue was transferred to a clean Petri dish, and the intestinal mucus was scraped out with the blunt end of a sterile scalpel. The sampled mucus was immediately frozen in liquid nitrogen and kept at −80 °C until microbial RNA extraction.

RNA Extraction, Illumina Sequencing, and Sample Quality Assessment

For RNA extraction, 200 µl of intestinal mucus were mixed with 500 µl of TriReagent (Invitrogen, Waltham, MA, United States), and microbes were lysed in microbial lysis tubes (Qiagen, Germantown, MD, United States) using one cycle of 30 s at 6 m/s in a FastPrep homogeniser (MP Biomedicals, Irvine, CA, United States). Total RNA was extracted using the MagMAX™-96 for Microarrays Total RNA Isolation Kit (Life Technologies, Carlsbad, CA, United States) following the manufacturer's instructions. The quality and integrity of the isolated RNA were checked on an Agilent Bioanalyzer 2100 total RNA Nano series II chip (Agilent), with RIN (RNA Integrity Number) values varying between 8 and 10. For further procedures, a total of 16 pooled samples were used. Each pool contained an equimolar amount of RNA from two individuals of the same diet, family, and tank. After quality and integrity checks, rRNA was removed using the Illumina Ribo-Zero Plus rRNA Depletion Kit (Illumina Inc., San Diego, CA, United States), which targets both eukaryotic and prokaryotic rRNA. Then, Illumina RNA-seq libraries were prepared from 500 ng of total ribo-depleted RNA using the Illumina TruSeq™ Stranded Total RNA Library Prep Kit (Illumina Inc., San Diego, CA, United States) according to the manufacturer's instructions. All RNA-seq libraries were sequenced on an Illumina NovaSeq 6000 platform in a 2 × 250-nucleotide paired-end (PE) read format, according to the manufacturer's protocol. Raw sequenced data were deposited in the Sequence Read Archive (SRA) of the National Center for Biotechnology Information (NCBI) under the BioProject accession number PRJNA790012 (BioSample accession numbers: SAMN24182635-50). Quality analysis of sequencing reads was performed with FASTQC v0.11.7 (https://www.bioinformatics.babraham.ac.uk/projects/fastqc; last accessed 23 April 2020), and libraries were pre-processed with Trimmomatic v0.40 (Bolger et al., 2014), removing reads with adaptor contamination, >10% of Ns in the sequence, or a mean sequence quality < 20. To ensure the elimination of rRNA sequences, filtered reads were aligned to rRNA and tRNA databases from NCBI (Altschul et al., 1990) and SILVA (Quast et al., 2013). The remaining sequences were used for further steps.

Bioinformatics Analysis

Cleaned reads were introduced in Trinity v.2.11.0 (Grabherr et al., 2011) for de novo transcriptome reconstruction, setting a k-mer length of 25 and a minimum k-mer coverage of 2. Assembled transcripts were clustered at a 95% identity threshold for redundancy removal using CD-HIT v.4.6 (Fu et al., 2012) to obtain unigenes.
Alignments against the Bacteria, Fungi, Archaea, and Virus sequences extracted from the NCBI NR database were performed with DIAMOND v.0.8.22 (Buchfink et al., 2014) using the blastx algorithm option (e-value < 10⁻⁵). The same algorithm was used to compare the non-aligned fraction of transcripts against the gilthead sea bream genome (Pérez-Sánchez et al., 2019) to confirm host origin (e-value < 10⁻⁵). These host sequences were not used in downstream analyses. The Lowest Common Ancestor (LCA) algorithm, implemented in the MEGAN software (Huson et al., 2018), was used to taxonomically classify the microbial-aligned sequences without losing biological significance. Since multiple alignments may occur, this algorithm assigned each unigene, when possible, to the lowest node in the NCBI taxonomy that encompasses the set of NR-aligned sequences. To calculate gene expression levels, cleaned reads were mapped against the reconstructed unigene metatranscriptome as a reference using Bowtie2 v.2.4.4 (Langmead and Salzberg, 2012). Mapping results were analysed using RSEM v1.2.15 (Li and Dewey, 2011), which rendered the read count for each gene in each sample. Then, read counts were normalised into FPKM to account for the effects of both sequencing depth and gene length. Functional annotation of GO-BP and KEGG metabolic pathways was performed over the assembled and annotated unigenes using blast2go (e-value ≤ 10⁻⁶) and DIAMOND (Buchfink et al., 2014), respectively. The GO-BP term hierarchy was retrieved using the QuickGO API tool (last accessed: August 2021), and GO-BP terms were clustered according to their ancestor in the Gene Ontology (GO) at level 2 (i.e., immediate children of Biological process; GO:0008150). Fisher test-based over-representation tests of GO-BP and KEGG terms were implemented in the goseq R package (Young et al., 2010). In the case of enriched KEGG categories, once the over-representation test was performed, all enriched terms belonging to processes associated with humans and not related to microbial species were excluded. The relationships between enriched GO-BP terms and between KEGG terms according to their shared genes were assessed using the runGSA function of the piano R package (Väremo et al., 2013), and the resulting networks were visualised with Cytoscape v3.8.2 (Smoot et al., 2011).

Statistics

Effects of genetic background and diet on the relative abundance of microbial transcript expression were analysed by two-way ANOVA using SigmaPlot v.14.5 (Systat Software Inc.). Data were previously checked for normal distribution (Shapiro-Wilk test) and homogeneity of variances (F-test). To study the separation among the groups, supervised PLS-DA and hierarchical clustering of samples were performed using EZinfo v3.0 (Umetrics) and the R package ggplot2, respectively. FPKM values of genes expressed in five or more samples were included in the analyses. The contribution of the different genes to the group separation was determined by variable importance in the projection (VIP) values, requiring the complete clustering of the conditions with VIP ≥ 1, which is considered an adequate threshold to determine discriminant variables in PLS-DA models (Wold et al., 2001; Li et al., 2012; Kieffer et al., 2016). Hotelling's T2 statistic (at the 99% range) was calculated with the multivariate software package EZinfo v3.0 to detect outliers in the model.
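Referring back to the expression quantification step above: FPKM normalisation is a simple rescaling of read counts by library size and gene length. A minimal sketch, assuming a genes × samples count matrix (the numbers below are made up):

```python
import numpy as np

def fpkm(counts, lengths_bp):
    """Fragments per kilobase of transcript per million mapped fragments.

    counts: genes x samples matrix of RSEM-style read counts.
    lengths_bp: per-gene (effective) transcript lengths in base pairs.
    """
    counts = np.asarray(counts, dtype=float)
    millions = counts.sum(axis=0) / 1e6        # per-sample library size, in millions
    kilobases = np.asarray(lengths_bp) / 1e3   # per-gene length, in kilobases
    return counts / millions[np.newaxis, :] / kilobases[:, np.newaxis]

# Made-up numbers for three unigenes across two samples.
counts = [[120, 300], [4000, 3500], [0, 15]]
lengths = [1500, 3200, 800]
print(fpkm(counts, lengths).round(1))
```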
The quality of the PLS-DA model was evaluated by the parameters R2Y(cum) and Q2(cum), which indicate fit and prediction ability, respectively. To assess whether the supervised model was being over-fitted, a validation test consisting of 500 random permutations was performed using the Bioconductor R package ropls (Thévenot et al., 2015). The optimal number of categories into which the clustering of genes could be divided was determined through the elbow method using the stats R package. For this purpose, the within-group sum of squares at each number of clusters (from 1 to 10) was calculated and graphed, and the location of the bend in the plot was taken as the appropriate number of clusters. Significantly enriched GO-BP and KEGG categories were obtained after FDR correction using a cut-off of 0.05.

DATA AVAILABILITY STATEMENT

Raw sequencing data can be found at NCBI's Sequence Read Archive under accession PRJNA790012 (BioSample accession numbers: SAMN24182635-50).

ETHICS STATEMENT

The animal study was reviewed and approved by the Ethics and Animal Welfare Committee of IATS and CSIC.

AUTHOR CONTRIBUTIONS

AS-B and JP-S: conceptualisation, funding acquisition, project administration, resources, and supervision. FN-C, MCP, and JP-S: data curation, formal analysis, and writing-original draft. FN-C and MCP: visualisation. FN-C, MCP, JC-G, AS-B, and JP-S: investigation, writing-review and editing, and read and approved the final manuscript. All authors contributed to the article and approved the submitted version.

FUNDING

This work was supported by the Spanish project Bream-AquaINTECH: From Nutrition and Genetics to Sea Bream Aquaculture Intensification and Technological Innovation (RTI2018-094128-B-100, AEI/FEDER, UE). Additional funding was obtained from the European Union's Horizon 2020 Research and Innovation Programme under grant agreement no. 818367, AquaIMPACT: Genomic and nutritional innovations for genetically superior farmed fish to improve efficiency in European aquaculture. MCP was funded by a Ramón y Cajal Postdoctoral Research Fellowship [RYC2018-024049-I, co-funded by the European Social Fund (ESF) and ACOND/2020 Generalitat Valenciana]. We acknowledge the support of the publication fee by the CSIC Open Access Publication Support Initiative through its Unit of Information Resources for Research (URICI).
2022-05-06T13:12:42.563Z
2022-05-06T00:00:00.000
{ "year": 2022, "sha1": "7b2c7047bad70eb9814397a3f837d2521a253f06", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "7b2c7047bad70eb9814397a3f837d2521a253f06", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
59473784
pes2o/s2orc
v3-fos-license
Soft collinear effective theory for gravity

We present how to construct a Soft Collinear Effective Theory (SCET) for gravity at the leading and next-to-leading powers from the ground up. The soft graviton theorem and the decoupling of collinear gravitons at the leading power are manifest from the outset in the effective symmetries of the theory. At the next-to-leading power, certain simple structures of amplitudes, which are completely obscure in Feynman diagrams of the full theory, are also revealed, which greatly simplifies calculations. The effective lagrangian is highly constrained by effectively multiple copies of diffeomorphism invariance that are inevitably present in gravity SCET due to mode separation, an essential ingredient of any SCET. Further explorations of effective theories of gravity with mode separation may shed light on lagrangian-level understandings of some of the surprising properties of gravitational scattering amplitudes. A gravity SCET with an appropriate inclusion of Glauber modes may serve as a powerful tool for studying gravitational scattering in the Regge limit.

Introduction

Quantum gravity is a fascinating frontier of theoretical physics, as it is the only known place where two otherwise perfect theoretical structures, general relativity and quantum mechanics, dramatically clash with each other. Even if we limit ourselves to the study of the "known" effective field theory (EFT) of gravity beneath the Planck scale, its scattering amplitudes display many surprising properties that are completely invisible at the lagrangian level, such as the well-known soft theorem and the decoupling of collinear gravitons (in the eikonal approximation [1] and in a full diagrammatic analysis [2]), infinite dimensional symmetries [3][4][5], and "gravity = gauge²" [6,7]. It is possible that some of those amazing properties of the amplitudes may already be revealed at the lagrangian level if we perform the path integral partially instead of going all the way to the amplitudes. Therefore, we are motivated to construct a "more effective" theory of gravity by integrating more modes out of the standard EFT of gravity. In this paper, as an example of such theories, we develop a Soft Collinear Effective Theory (SCET) for gravity at the leading and next-to-leading powers of λ. SCET is an EFT originally developed in [8][9][10][11][12] in the context of QCD for systematically and efficiently calculating amplitudes for clusters of highly collimated energetic particles with soft radiation, where λ is a small parameter used to characterize such a region of the phase space, and the lagrangian of a SCET is an expansion in powers of λ. Ref. [13] made an early attempt to construct a gravity SCET at the leading power. However, our gravity SCET disagrees with theirs, even at the leading power, in very fundamental ways, such as the symmetry structures and the forms of gravitational Wilson lines. We will discuss the differences in Section 4.8. As SCET is largely an unfamiliar subject outside the QCD community, and also to avoid being misguided by some structures of QCD SCET that are not shared by gravity, we will develop a gravity SCET from the ground up, reviewing general concepts of SCET and introducing some specifics of gravity SCET in Section 2, before moving on to the constructions of the leading-power and next-to-leading-power SCET lagrangians in Sections 3 and 4. In Appendices A.1-A.3,
we give explicit examples of full-theory amplitudes expanded to the next-to-leading power and show how they match the structures predicted by the gravity SCET. We will find that the structure of the gravity SCET lagrangian dictated by the effective symmetries and power counting rules indeed reveals many properties of gravity amplitudes that are completely obscure in the full theory (i.e., the "full EFT" of gravity valid beneath the Planck scale), without recourse to any actual calculations. For example, we will see that the gravitational soft theorem and the absence of collinear IR divergences at the leading power are manifestly dictated from the outset by the symmetries of the effective lagrangian. Soft IR divergences may be present at the leading power, but their sources will be explicitly isolated in the effective lagrangian. At the next-to-leading power, the structure of the gravity SCET lagrangian immediately translates to certain structures in the amplitudes. As illustrated by the explicit examples in Appendices A.1-A.3, this can greatly simplify the calculations of amplitudes, where a lengthy full-theory calculation with laborious expansions in λ and tricky cancellations of many terms can be reproduced by short scribbles in the EFT, calculating just the few terms needed to determine the entire amplitude. We will also see other interesting structures of gravity SCET that may be relevant for a deeper understanding of gravity. For example, in a gravity SCET that describes N distinct collinear directions, the diffeomorphism (diff) invariance of the full general relativity "effectively" turns into N copies of diff invariance (the precise meaning of which will be explained in Section 2.5), just as a QCD SCET with N collinear sectors "effectively" has N copies of SU(3)_c gauge invariance. This suggests that the original full-theory S-matrix (which "contains" all SCETs with different values of N) must in some way have an infinite number of effective diff invariances. It would be interesting to explore connections between this and the infinite dimensional symmetries of gravitational scattering amplitudes discovered recently [3][4][5]. We will also find that the N copies of diff invariance in gravity SCET lead to specific variations of the nonlocal "dressing" of operators discussed in [14,15]. We will be led to our specific forms of nonlocal dressing as a consequence of effective gauge symmetries of SCET at long distances, while Refs. [14,15] arrived at a general notion of dressing by trying to answer the question of what the fundamental observables might be in the yet-to-be-found ultimate quantum theory of gravity.

Building up gravity SCET

In this section we build gravity SCET from the ground up, starting from fundamental principles of EFT. Along the way, we review essential conceptual elements of SCET in order to make the paper accessible to the reader who is not familiar with "modern" effective field theories with a property called mode separation (to be discussed below). Some examples of such EFTs are Non-Relativistic QCD (NRQCD) [16,17], Non-Relativistic General Relativity (NRGR) [18], and SCET.

The target phase space

By design, an EFT is aimed only at a prescribed, limited region of the phase space characterized by a small parameter or parameters (e.g., "all particles have energy much below 1 TeV"). We therefore must begin by defining the target phase space of our gravity SCET. Imagine a scattering process in 4d Minkowski spacetime. (All spacetime indices in this paper will be raised/lowered/contracted via the Minkowski metric in the +−−− convention.)
Identify the most energetic particles in the initial and final states and cluster them into multiple collinear sectors, where a collinear sector is defined as a set of energetic particles (with energy of O(Q)) moving in similar directions (with angular spread of O(λ) ≪ 1). We require different collinear sectors to be well separated in direction. If two sectors are too similar in direction, merge them into one sector by choosing a larger λ. (If such a merger leads to λ of O(1), our SCET is not an appropriate EFT for the process in question.) After thus identifying all collinear sectors, we are left with non-energetic particles, which we will collectively refer to as the soft sector. We focus on processes where soft particles are around only because they are required by nature to be around given the presence of the collinear sectors, not because we intended to include them. The energy scale of the soft sector will then turn out to be O(λ²Q) (explained in Section 2.3). Our target phase space is thus defined by the number of collinear sectors N, the hard energy scale Q, and the small parameter λ ≪ 1. The most important parameter is λ, which characterizes how well collimated the energetic particles are in each collinear sector. Our SCET effective lagrangian will be an expansion in powers of λ. It should be noted that each collinear sector in principle comes with its own λ and Q. To avoid an overly general presentation that beclouds the main points, however, we assume a common λ and a common Q for all the collinear sectors. We will scale them independently only when it is necessary or convenient to do so. In order to focus on the properties of SCET that are solely associated with gravity, we make another simplifying assumption: a collinear splitting due to non-gravitational interactions such as QCD occurs with a splitting angle that is either much larger or much smaller than λ. In the former case, we regard a non-gravitational "collinear" splitting as actually giving rise to two distinct collinear sectors. For the latter, we view the stream of almost exactly collinear, non-gravitationally splitting particles as a single massless "particle". The former would be a two-stage EFT where the full theory is first matched to a well-established gauge-theory SCET with a large "λ", which is then matched to the gravity SCET developed in this paper. The latter would be a two-stage EFT with the reversed order.

Manifest power counting

To have systematic control of the special kinematics of its target phase space, an EFT must be equipped with well-defined rules for power-counting the small parameters that define the target phase space. For a SCET, this means that each term in its effective lagrangian must scale with a definite power of λ, so that we can ignore higher order terms irrelevant for achieving the desired precision. But this is not good enough. If interaction terms in the lagrangian are allowed by the symmetries of an EFT and appear to be the largest contributions in terms of its power counting rules, they should not later turn out to be actually absent or to exhibit "unexpected" systematic cancellations. The seeming existence of such terms would indicate that we have a wrong EFT, and the theory should be revised such that those "largest" terms would be manifestly absent from the outset, by being forbidden by symmetry or deemed subleading by power counting. To the best of our knowledge, our theory is the first formulation of gravity SCET that passes this test of manifest
To the best of our knowledge, our theory is the first formulation of gravity SCET that passes this test of manifest 1 All spacetime indices in this paper will be raised/lowered/contracted via the Minkowski metric in the +−−− convention. power counting (see the end of Section 3.3 for more on this point and also Section 4.8 for an earlier attempt on SCET for gravity [13] where the symmetries and power counting rules allow us to write down terms that can actually be systematically removed). Restricting the phase space to the target phase space requires that the momentum space of virtual particles should be likewise restricted in an EFT. This is to ensure that power counting be manifestly compatible with unitarity, by having the momentum integrations along a cut scale in the same way as the corresponding phase space integrations in the optical theorem. So, in our SCET, only the collinear and soft momentum modes are allowed even for virtual particles. As usual in an EFT, virtual contributions from "missing" modes outside the target space can be systematically restored by including all possible effective interactions allowed by symmetry in the effective lagrangian and suitably adjusting their coefficients. There are an infinite number of such interactions, which is why we need well-defined power counting rules that allow us to truncate the effective lagrangian in a controlled way. Other underlying assumptions We assume that all particles have no or negligible mass compared to the soft energy scale, λ 2 Q, which is the lowest energy scale in the effective theory. If this assumption is violated, our SCET is not an appropriate description of the physics except in one situation: if all massive particles are much heavier than Q, we can just integrate them out to obtain an EFT expanded in inverse powers of those heavy masses. We can then regard this EFT as the "full" theory that our SCET provides an effective description of. Ultimately, the presence of gravity means that the full theory is necessarily an EFT expanded in powers of Q/M Pl . The collinear lightcone coordinates Let's introduce some notations for describing our target phase space in a manner convenient for power counting. We index the collinear sectors by i = 1, 2, . . . , N . For each i, we introduce a pair of null 4-vectors n + i and n − i satisfying where the spatial part, n + i , of n + i is taken to be in the direction of the i-th collinear sector, modulo an angular ambiguity of O(λ). 2,3 It is convenient to also define 4-vectors n + i and n − i as In this basis, an arbitrary 4-vector a can be written as No summation is implied for the repeated sector index i here and throughout the paper. A summation over collinear sectors will always be indicated explicitly. 3 Our normalization convention differs from the standard normalization in the SCET literature, n+ i · n− i = 2, to prevent twos and halves from appearing when raising or lowering ± indices. with and Then, for arbitrary 4-vectors a and b, we have (2.7) Mode separation and scaling In an EFT, manifest power counting often requires mode separation, i.e., a further division of the target phase space into subregions, if modes in different subregions are found to scale differently in terms of the small parameters that define the target phase space. Mode separation is arguably the feature that distinguishes "modern" EFTs like SCET from the classic Wilsonian EFTs. In the latter, there is only one type of momentum modes, which scale as Λ −1 with the cutoff Λ. 
Mode separation and scaling

In an EFT, manifest power counting often requires mode separation, i.e., a further division of the target phase space into subregions, if modes in different subregions are found to scale differently in terms of the small parameters that define the target phase space. Mode separation is arguably the feature that distinguishes "modern" EFTs like SCET from the classic Wilsonian EFTs; in the latter, there is only one type of momentum mode, governed by the single cutoff scale Λ (with coordinates scaling as Λ⁻¹). In more general EFTs, mode separation is often necessary to achieve manifest power counting. Below, we describe the momentum modes and their scalings in our target phase space.

The collinear momentum scaling

Let us begin with the λ scaling of momenta in the collinear sector along $n_i^+$ for a given arbitrary i, or the $n_i$-collinear sector for short. Let p be the 4-momentum of an $n_i$-collinear particle that is either on-shell or nearly on-shell due to emissions/absorptions of soft particles. By definition, the particle is moving approximately in the $\vec n_i^+$ direction, carrying a large energy of O(Q). We thus have $p^{+_i} \sim Q$. Next, again by definition, a typical angle between $\vec n_i^+$ and the exact direction of $\vec p$ is O(λ). Hence, $p^{\perp_i} \sim \lambda Q$. Finally, in order for p to be (nearly) on-shell, the $p^{+_i} p^{-_i}$ and $p^{\perp_i}\!\cdot p^{\perp_i}$ terms in $p^2 = 2 p^{+_i} p^{-_i} + p^{\perp_i}\!\cdot p^{\perp_i}$ must be of the same order in λ, so that they can add up to zero. Hence $p^{-_i} \sim \lambda^2 Q$, and, in units of Q, an $n_i$-collinear momentum scales as $p \sim (1, \lambda^2, \lambda)_i$.

The cross-collinear scaling and reparametrization invariance (RPI)

Let us now consider the case in which we pick one momentum from the $n_i$-collinear sector and another from the $n_j$-collinear sector with i ≠ j. Since different collinear sectors in our target phase space are well separated in direction from each other, we have
$$n_i^a \cdot n_j^b \sim \lambda^0 \qquad \text{for } i \neq j \text{ and } a, b \in \{+, -\}\,,\tag{2.8}$$
which we dub the cross-collinear scaling. In other words, an $n_i$-collinear momentum $p_i$ scales as $p_i \sim (1, 1, 1)_j$ for all j ≠ i.

We must be careful when $\vec n_i^+$ and $\vec n_j^+$ point back-to-back. For example, for $\vec n_i^+ = (0,0,1)$ and $\vec n_j^+ = (0,0,-1)$, we might be tempted to choose $n_i^\pm \propto (1,0,0,\pm 1)$ and $n_j^\pm \propto (1,0,0,\mp 1)$. But this would lead to $n_i^+ \cdot n_j^- = 0$, violating the cross-collinear scaling (2.8). Notice, however, that the conditions (2.1) do not determine $n_i^-$ uniquely from a given $n_i^+$. There is also freedom even in the choice of $n_i^+$, due to the O(λ) ambiguity in the direction of $\vec n_i^+$ and the arbitrariness in the normalization of $n_i^+$. These ambiguities are fundamental redundancies in SCET, and a SCET lagrangian must be invariant under all possible redefinitions of $n_i^\pm$ that preserve the conditions (2.1) and the scaling $p \sim (1, \lambda^2, \lambda)_i$ [19,20]. This property is called reparametrization invariance (RPI), and it should be noted that each collinear sector comes with its own RPI. So, a more precise statement is that the cross-collinear scaling law (2.8) holds for generic choices of $n_i^\pm$ and $n_j^\pm$, even when $\vec n_i^+$ and $\vec n_j^+$ are back-to-back. For example, for $\vec n_i^+ = (0,0,1)$ and $\vec n_j^+ = (0,0,-1)$, the cross-collinear scaling holds for the choice $n_i^+ \propto (1,0,0,1)$, $n_i^- \propto (5,0,4,3)$, $n_j^+ \propto (1,0,0,-1)$, $n_j^- \propto (5,0,-4,-3)$. In the remainder of the paper, we tacitly assume that a generic choice has been made such that the cross-collinear scaling law holds.
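The generic back-to-back choice quoted above is easy to verify numerically. The snippet below (ours) checks that all four vectors are null, that they obey (2.1) after a rescaling, and that none of the cross-collinear products is small:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ eta @ b

ni_p = np.array([1.0, 0.0,  0.0,  1.0])
ni_m = np.array([5.0, 0.0,  4.0,  3.0]) / 2.0   # rescaled so ni_p . ni_m = 1
nj_p = np.array([1.0, 0.0,  0.0, -1.0])
nj_m = np.array([5.0, 0.0, -4.0, -3.0]) / 2.0

for v in (ni_p, ni_m, nj_p, nj_m):
    assert abs(dot(v, v)) < 1e-12               # all four vectors are null
assert abs(dot(ni_p, ni_m) - 1.0) < 1e-12       # conditions (2.1) for each sector
assert abs(dot(nj_p, nj_m) - 1.0) < 1e-12

# cross-collinear products are all O(1), as required by (2.8):
for a in (ni_p, ni_m):
    for b in (nj_p, nj_m):
        print(dot(a, b))    # prints 2.0, 4.0, 4.0, 12.5 -- none of them small
```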
The soft momentum scaling

Let us now find the λ scaling of soft momenta. By definition, soft particles are much less energetic than the collinear particles, with energies low enough that soft particles can be added or removed without changing the existing kinematics of the collinear sectors that we have already defined. (If the attachment of a "soft" particle to an $n_i$-collinear line of some i were found to knock the line's momentum out of the $(1, \lambda^2, \lambda)_i$ range, we should have defined a separate collinear sector for that "soft" particle in the first place, with its own smaller value of "Q". Our simplifying assumption of a universal Q and λ, however, excludes such possibilities from consideration.) Soft particles are also not preferentially associated with any particular direction. (If the soft particles as a whole had some directional preference, they should form their own collinear sector with a smaller Q and a larger λ; again, our simplifying assumption of universal Q and λ excludes such possibilities.) Therefore, a soft 4-momentum must scale as $\sim (\lambda^a, \lambda^a, \lambda^a)_i$ for all i, with some a > 1.

Now, imagine a generic Feynman diagram with N collinear sectors and attach a soft particle to an $n_i$-collinear line with momentum $p_c$. The scaling of the $p_c^{-_i}$ component (i.e., the second entry of $(1, \lambda^2, \lambda)_i$) tells us that a must be ≥ 2 in order for the attachment of the soft particle to preserve the $n_i$-collinearity of the $n_i$-collinear line. Then, in order to minimize λ suppressions from derivatives acting on the soft particle's field in the SCET lagrangian, we are interested in the a = 2 case, so that the largest soft contributions are taken into account by the effective lagrangian. (Our SCET thus belongs to the category of SCET often referred to as SCET_I in the SCET literature, in which our soft particles tend to be called ultrasoft particles.) To summarize, a soft 4-momentum p scales as $p \sim (\lambda^2, \lambda^2, \lambda^2)_i$ for all i, which we simply write as $p \sim (\lambda^2, \lambda^2, \lambda^2)$ without referring to any i. Soft momenta have $p^2 \sim \lambda^4$.

Comparison with Soft Graviton Effective Theory

Before we move on to the next step, we would like to compare our gravity SCET with SGET (Soft Graviton Effective Theory [21,22]), which was originally proposed by [21] to demonstrate that a low cutoff (∼ meV) that one might want to impose on the gravity sector to render the small cosmological constant natural is not necessarily in contradiction with the known high cutoff (≳ TeV) of the standard-model matter sector. The difference between SGET and our gravity SCET, just like the difference between any two EFTs, is how their target phase spaces are prescribed. In SGET, all graviton fields are soft, and matter fields only have soft fluctuations around a point on the mass shell, similarly to Heavy Quark Effective Theory (HQET) [23], where all gluons are soft and quarks fluctuate only softly around a point on the mass shell.

Mode-separating fields

As a first step toward manifest power counting, we have separated momentum modes in our target phase space and classified them into distinct groups depending on how they scale with λ. So, each particle species now comes with N + 1 distinct propagators: the $n_i$-collinear propagators (i = 1, ..., N) and the soft propagator. This mode separation can be explicitly facilitated in a quantum field theory by introducing an independent interpolating field for each propagator type, for each particle species. That is, for each field Φ(x) of the full theory (recall that we have assumed all particles have no or negligible mass, so the particle content of the EFT agrees with that of the full theory), we introduce N distinct collinear fields $\Phi_i(x)$ (i = 1, ..., N) and a soft field $\Phi_s(x)$, where $\Phi_i(x)$ and $\Phi_s(x)$ only contain the $n_i$-collinear and soft Fourier modes, respectively. That is, when $\partial_\mu$ acts on $\Phi_s$, it scales as $\partial \sim (\lambda^2, \lambda^2, \lambda^2)$; when it acts on $\Phi_i$, it scales as $\partial \sim (1, \lambda^2, \lambda)_i$. (Since a soft momentum can be added to an $n_i$-collinear momentum without destroying the $n_i$-collinear scaling, the $n_i$-collinear scaling is ambiguous up to a soft momentum by definition. So, strictly speaking, the momentum modes in $\Phi_i$ actually scale as $\sim (1, \lambda^2, \lambda)_i + (\lambda^2, \lambda^2, \lambda^2)$, which violates the spirit of manifest power counting. Strictly manifest power counting can be recovered, at the expense of notational simplicity, by introducing the so-called label momenta as was done in the original SCET papers [8-12]. We opt for notational simplicity and just remember that $\Phi_i(x)$ also contains soft fluctuations, following the "position space" formulation of SCET [24,25]. In particular, since we will be concerned with next-to-leading-power corrections later in this paper, we must watch out for the O(λ²) fluctuations in the ⊥-components of collinear momenta.) It should be emphasized that we are not increasing the degrees of freedom but merely giving different names to the different groups of Fourier modes of each Φ(x), in order to facilitate manifest power counting.
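The mode assignments above can be phrased as a simple classifier. The sketch below is ours; the O(1) tolerances are illustrative choices, not part of the construction:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ eta @ b

def classify_mode(p, sectors, Q, lam):
    """sectors: list of (n_plus, n_minus) pairs obeying (2.1).
    Returns 'soft', ('collinear', i), or 'hard' using the scalings of Section 2.3."""
    if all(abs(c) < 3 * lam**2 * Q for c in p):          # p ~ (lam^2, lam^2, lam^2)
        return 'soft'
    for i, (n_p, n_m) in enumerate(sectors):
        p_plus, p_minus = dot(n_m, p), dot(n_p, p)       # components as in (2.4)
        p_perp = p - p_plus * n_p - p_minus * n_m
        perp = np.sqrt(abs(dot(p_perp, p_perp)))
        if p_plus > 0.3 * Q and abs(p_minus) < 3 * lam**2 * Q and perp < 3 * lam * Q:
            return ('collinear', i)                      # p ~ (1, lam^2, lam)_i
    return 'hard'   # outside the target phase space: integrated out of the EFT
```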
Factorized effective gauge symmetry

This is the most profound implication of mode separation and the heart of SCET as recognized in the original QCD SCET [10,11], but it is a general observation that should also apply to gravity SCET. Since the splitting of a full-theory field Φ into $\Phi_i$ (i = 1, ..., N) and $\Phi_s$ applies equally to gauge fields (including the graviton field), each full-theory gauge symmetry G (including the diffeomorphism invariance) splits into N + 1 distinct gauge symmetries: the $n_i$-collinear gauge symmetries $G_i$ (i = 1, ..., N) and the soft gauge symmetry $G_s$. An $n_i$-collinear gauge transformation $U_i(x) \in G_i$ should only contain $n_i$-collinear Fourier modes, so that $U_i(x)$ maps the associated $n_i$-collinear gauge field to itself. Then, $U_i(x)$ also maps any other $\Phi_i$ to a $\Phi_i$. Similarly, a soft gauge transformation $U_s(x) \in G_s$ only contains soft Fourier modes, mapping a $\Phi_s$ to a $\Phi_s$ and also a $\Phi_i$ to a $\Phi_i$. Therefore, a $\Phi_i$ is charged under both $G_i$ and $G_s$, while a $\Phi_s$ is charged under $G_s$. We must not charge a $\Phi_i$ under any $G_j$ with j ≠ i, because $U_j(x)$ with j ≠ i would not map a $\Phi_i$ to a $\Phi_i$, thereby jeopardizing mode separation, in the sense that mode separation would then depend on the gauge choice. Similarly, we must not charge a $\Phi_s$ under $G_i$ for any i. Such compatibility of gauge invariance and mode separation also requires us to forbid all the gauge transformations in G that do not belong to any $G_i$ or $G_s$. Since there is still a one-to-one and onto correspondence between the allowed modes of gauge fields and those of gauge transformations, the EFT still possesses just enough gauge transformations to gauge away all unphysical polarizations of the gauge fields if we so wish.

How can we understand such a splitting $G \to G_{1,\ldots,N,s}$ from a "microscopic" viewpoint? Imagine the set of all full-theory diagrams with the external lines in the target phase space. We can then "derive" the set of all EFT diagrams by identifying every full-theory propagator that does not belong to the target phase space, shrinking all such propagators, and tucking them into effective vertices. Consider, for example, an $n_i$-collinear electron in the initial state, and let it emit an $n_j$-collinear photon with j ≠ i in the full theory (i.e., QED). Then, the electron's propagator after the emission would necessarily be highly off-shell and out of the target phase space, so it must be shrunk and tucked into an effective vertex.
Thus, in the EFT, such an emission is described by some effective operator, not by a vertex from the covariant derivative in the $n_i$-collinear electron's kinetic term. In contrast, if the $n_i$-collinear electron emits an $n_i$-collinear photon with the same i, the electron's propagator after the emission remains $n_i$-collinear, so such an emission continues to be described by a vertex from the covariant kinetic term of the electron. Such heuristic considerations suggest that the only covariant derivatives that may act on the $n_i$-collinear electron are those associated with the $n_i$-collinear photon (with the same i) or the soft photon. In other words, each collinear sector has its own gauge invariance associated with the collinear photon modes in that sector, and all sectors share a common gauge invariance associated with the soft photon.

To summarize conceptually, each full-theory gauge symmetry G is reduced in the SCET to its subset as
$$G \;\longrightarrow\; G_s \ltimes \left( G_1 \times \cdots \times G_N \right)\,,\tag{2.9}$$
where the symbol ⋉ indicates a semi-direct product, in the same convention in which we would write Poincaré = Lorentz ⋉ Translations. The semi-direct product expresses the property that the generators of $G_{1,\ldots,N}$ are also charged under $G_s$, just like the translation generators are also charged under the Lorentz group. In our discussion above, the semi-direct product is manifested in the fact that a $\Phi_i$ is charged under both $G_i$ and $G_s$, while a $\Phi_s$ is charged under $G_s$. In the theory of gravity in terms of the vierbein (or tetrad), the gravitational part $G_{\rm grav}$ of G is given by
$$G_{\rm grav} = \text{diffeomorphisms} \ltimes \text{local Lorentz}\,,\tag{2.10}$$
which we call "diff×Lorentz" for short.

Although (2.9) is conceptually a reduction of the gauge symmetry G to its subset corresponding to the collinear and soft modes, in practice it acts as if G were enhanced to N + 1 copies of G when we try to constrain the structure of the effective lagrangian. Essentially, this is the source of the power of SCET.

Unfortunately, the form of effective gauge symmetry as in (2.9) is not yet fully compatible with mode separation. The problem is the ⋉ symbol, that is, the fact that a $\Phi_i$ is charged under not only $G_i$ but also $G_s$. This means that a gauge covariant derivative acting on a $\Phi_i$ must contain both the $n_i$-collinear and soft gauge fields, which we call $A_i$ and $A_s$, referring to all gauge fields collectively, including the graviton. Then, since $A_i$ and $A_s$ always appear together in the combination $A_i + A_s$, the collinear and soft modes of A are actually not separated. (That is, even if we write A as $A_i + A_s$, the theory doesn't know it.) To solve this problem and accomplish true mode separation, we redefine every $\Phi_i$ schematically as
$$\Phi_i \;\longrightarrow\; Y_{R_\Phi}[A_s]\, \Phi_i\,,\tag{2.11}$$
where $R_\Phi$ denotes the gauge representations (including the spin) of the $\Phi_i$, and $Y_{R_\Phi}[A_s]$ is a functional of $A_s$ (and also a function of x) such that the new $\Phi_i$ after the redefinition is invariant under $G_s$. Such a field redefinition was first proposed and worked out for internal gauge symmetries in the context of the original QCD SCET [11]. For gravity SCET, we will determine the form of Y for diff×Lorentz in Section 3.2.3. With the field redefinition (2.11), the effective gauge symmetry becomes truly factorized and mode separation is completely achieved. That is, rather than (2.9), we now schematically have
$$G \;\longrightarrow\; G_s \times G_1 \times \cdots \times G_N\,,\tag{2.12}$$
which we call the factorized effective gauge symmetry. Now, each field in the EFT is charged under only one of $G_s$, $G_1$, ..., $G_N$. It should be noted, however, that the implications of the "⋉" in (2.9) have not gone away completely.
In particular, the structure (2.12) inevitably double-counts soft modes and low-energy collinear modes. This double-counting has to be systematically removed by zero-bin subtraction [26]. To the best of our knowledge, our theory is the first formulation of gravity SCET that is consistent with the factorized gauge symmetry (see Section 4.8 for a comparison with the literature). Finally, being true symmetries of nature rather than redundancies of our description, the global part of G (e.g., the global Poincaré group, the global SU(3) color) in the EFT is identical to that in the full theory. Namely, for each particle species, all of its $\Phi_i$ (i = 1, ..., N) and $\Phi_s$ have the same, common, global charges as the corresponding full-theory Φ. Only gauge symmetries are factorized in the EFT.

The structure of the effective action

Although we adopt the "position-space" formulation of SCET [24,25] in our actual calculations (e.g., those in Appendices A.1-A.3), the language of Ref. [27] is convenient for expressing the SCET effective action $S_{\rm eff}$ in a manner revealing mode separation and the symmetry structure (2.12):
$$S_{\rm eff} = S_{\rm full}[\Phi_s] + \sum_{i=1}^{N} S_{\rm full}[\Phi_i] + S_{\rm hard}[\Phi_1, \ldots, \Phi_N, \Phi_s]\,.\tag{2.13}$$
Here, $S_{\rm full}[\Phi_s]$ is exactly the full-theory action $S_{\rm full}[\Phi]$ with Φ replaced by $\Phi_s$. It is exactly the full-theory action because there is nothing "soft" about the soft sector in isolation, except that the running couplings in $S_{\rm full}[\Phi_s]$ should be evaluated at the scale µ ∼ λ²Q, corresponding to the only invariant energy scale of the soft sector, p² ∼ λ⁴Q². Similarly, there is nothing "$n_i$-collinear" about the $n_i$-collinear sector in isolation: we could just boost the frame in the $\vec n_i^+$ direction by a rapidity of ∼ log λ, so that the $n_i$-collinear scaling ∼ (1, λ², λ)ᵢ would now scale isotropically as ∼ (λ, λ, λ)ᵢ. Therefore, $S_{\rm full}[\Phi_i]$ must be exactly the full-theory action with Φ replaced by $\Phi_i$, except that the running couplings in $S_{\rm full}[\Phi_i]$ should be evaluated at the scale µ ∼ λQ, corresponding to the only invariant energy scale of the $n_i$-collinear sector, p² ∼ λ²Q². To be absolutely clear, the $\Phi_i$ in (2.13) is the $\Phi_i$ after the field redefinition (2.11), i.e., the one that is no longer charged under $G_s$. Finally, the hard interactions, $S_{\rm hard}[\Phi_1, \ldots, \Phi_N, \Phi_s]$, consist of all possible terms containing fields from more than one sector. The factorized gauge symmetry structure (2.12) is evident in the $S_{\rm full}[\Phi_s] + \sum_i S_{\rm full}[\Phi_i]$ part. The entire non-triviality of SCET, therefore, lies in the implications of the factorized gauge symmetry and power counting for the structure of $S_{\rm hard}$.

Two types of renormalization

The structure (2.13) implies the following two categories of renormalization. A diagram whose external lines all belong to the same sector renormalizes a vertex in $S_{\rm full}[\Phi_{\rm that\ sector}]$. Such renormalization is already taken into account by evaluating the running couplings in the $S_{\rm full}$ at the appropriate scale, as just discussed in Section 2.6; hence, no additional calculations are required in the EFT for such renormalizations. On the other hand, a diagram whose external lines belong to more than one sector renormalizes a vertex in $S_{\rm hard}[\Phi_1, \ldots, \Phi_N, \Phi_s]$. The "first" instance of such renormalization is matching, i.e., equating EFT and full-theory amplitudes at the hard scale µ ∼ Q to determine the "initial" values of the coefficients of effective operators in $S_{\rm hard}$. Then, we have a series of incremental renormalizations from matching the EFT at scale µ − dµ onto the EFT at µ, which gives us the renormalization group equations for those coefficients.
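As a schematic illustration of the second category (ours, not a formula from the paper), the incremental matchings described above integrate up into the familiar renormalization-group solution for a hard Wilson coefficient C:

$$\mu\,\frac{dC(\mu)}{d\mu} = \gamma(\mu)\, C(\mu) \quad\Longrightarrow\quad C(\mu) = C(Q)\,\exp\!\left[\,\int_Q^{\mu}\frac{d\mu'}{\mu'}\,\gamma(\mu')\right],$$

with the initial condition C(Q) fixed by the matching at the hard scale, and γ the anomalous dimension computed within the SCET. As discussed in Section 3.3 below, running C down to the soft scale µ ∼ λ²Q is what resums the large logarithms log(1/λ²).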
Compatibility of power counting and gauge invariance

For any gauge field (including the graviton), the relative λ dimensions between different components of the field can be completely fixed by requiring that power counting and gauge invariance be compatible, in the sense that the Ward identities should be satisfied order-by-order in the λ expansion. For example, consider an $n_i$-collinear U(1) gauge field $A_\mu$ with an $n_i$-collinear U(1) gauge invariance under $A_\mu \to A_\mu + \partial_\mu \alpha$, with α only containing $n_i$-collinear Fourier modes. This invariance leads to the Ward identity that an amplitude should vanish when the polarization vector $\varepsilon_\mu(p)$ of a "photon" with an $n_i$-collinear momentum p is replaced with $p_\mu$. In order for such Ward identities to be satisfied order-by-order in λ, the relative λ dimensions between the components of $\varepsilon_\mu(p)$ must be the same as those between the components of the associated $p_\mu$. That is, we must have $A_\mu(p) \sim \lambda^a\, p_\mu$, i.e., $A \sim \lambda^a (1, \lambda^2, \lambda)_i$, with some a. Moreover, gauge transformations must be able to gauge away the unphysical components of $A_\mu$, so we must have $\alpha \sim \lambda^a$ with the same a, so that $A_\mu \sim \partial_\mu \alpha$. The overall scaling parameter a can then be fixed by requiring the kinetic term in the effective action to be O(λ⁰). This is because our effective action is an expansion in powers of λ and we are assuming that interactions are weak, i.e., that the kinetic terms dominate the action. Below, we will apply this line of reasoning to the collinear and soft graviton fields to determine how they scale with λ.

Graviton fields and their scalings

We describe the spacetime geometry in terms of the vierbein $e^\mu{}_\nu$, where the first index µ is a vector index for the Lorentz part of the diff×Lorentz gauge group, and the second index ν is a 1-form index for the diff. By definition it satisfies $g_{\mu\nu} = \eta_{\rho\sigma}\, e^\rho{}_\mu\, e^\sigma{}_\nu$ at every spacetime point. We write the inverse vierbein as $\bar e^\mu{}_\nu$, where the first index µ is for the diff and the second index ν is for the Lorentz. We have $\bar e^\mu{}_\rho\, e^\rho{}_\nu = \delta^\mu_\nu$ and $\bar g^{\mu\nu} = \bar e^\mu{}_\rho\, \bar e^\nu{}_\sigma\, \eta^{\rho\sigma}$ by definition, where $\bar g^{\mu\nu}$ is the inverse metric. (We use bars to distinguish the inverse vierbein and inverse metric from the vierbein and metric, because of our convention that all indices are raised/lowered with the Minkowski metric.) We define the graviton field $\varphi^\mu{}_\nu$ via
$$e^\mu{}_\nu \equiv \delta^\mu_\nu + \varphi^\mu{}_\nu\,.\tag{2.14}$$
This then gives
$$\bar e^\mu{}_\nu = \delta^\mu_\nu - \varphi^\mu{}_\nu + \varphi^\mu{}_\rho\, \varphi^\rho{}_\nu - \cdots\,.\tag{2.15}$$
We also define the metric fluctuation $h_{\mu\nu}$ via $g_{\mu\nu} \equiv \eta_{\mu\nu} + h_{\mu\nu}$. We then have $h_{\mu\nu} = \varphi_{\mu\nu} + \varphi_{\nu\mu} + \varphi_{\rho\mu}\, \varphi^\rho{}_\nu$. (Our graviton fields are not canonically normalized; to make them canonical, we must rescale them by the Planck mass, $\varphi \to \varphi / M_{\rm Pl}$.) Most importantly, in the SCET, mode separation tells us that the graviton field $\varphi_{\mu\nu}$ should be split into N collinear graviton fields $\varphi^i_{\mu\nu}$ (i = 1, ..., N) and a soft graviton field $\varphi^s_{\mu\nu}$.

Now let us find out how $\varphi^{i,s}_{\mu\nu}$ scale with λ. First, as we just discussed in Section 2.8, the relative λ scalings between different components of $\varphi^{i,s}_{\mu\nu}$ can be completely fixed by the compatibility of power counting and gauge symmetry. Under an infinitesimal $n_i$-collinear or soft diff×Lorentz gauge transformation, $\varphi^{i,s}_{\mu\nu}$ transforms as
$$\varphi^{i,s}_{\mu\nu} \;\longrightarrow\; \varphi^{i,s}_{\mu\nu} + \partial_\nu\, \xi^{i,s}_\mu + \omega^{i,s}_{\mu\nu} + \cdots\,,\tag{2.16}$$
where $\xi^{i,s}_\mu$ is an $n_i$-collinear or soft diff transformation parameter, $\omega^{i,s}_{\mu\nu}$ (antisymmetric in µν) is an $n_i$-collinear or soft local Lorentz gauge transformation parameter, and the ellipses represent terms containing both $\varphi^{i,s}_{\mu\nu}$ itself and either one of the transformation parameters. (We will justify why we can ignore the ellipses later.) Then, the compatibility of power counting and local Lorentz gauge invariance tells us that we must have $\varphi^{i,s}_{\mu\nu} \sim \lambda^a\, \omega^{i,s}_{\mu\nu}$ with some a. Furthermore, we must be able to use this gauge transformation to gauge away the unphysical, antisymmetric piece of $\varphi^{i,s}_{\mu\nu}$ if we wish. This requires a = 0, and hence $\varphi^{i,s}_{\mu\nu} \sim \omega^{i,s}_{\mu\nu}$. Then, since $\omega^{i,s}_{\mu\nu}$ is antisymmetric, we also have $\varphi^{i,s}_{\mu\nu} \sim \varphi^{i,s}_{\nu\mu}$.
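As an aside, the vierbein algebra above, (2.14)-(2.15) and the relation between h and ϕ, can be verified symbolically. The snippet below (ours) does so with sympy:

```python
import sympy as sp

eta = sp.diag(1, -1, -1, -1)
phi = sp.Matrix(4, 4, lambda m, n: sp.Symbol(f'phi{m}{n}'))  # phi^mu_nu
e = sp.eye(4) + phi                                          # vierbein (2.14)

# g_{mu nu} = eta_{rho sigma} e^rho_mu e^sigma_nu  <=>  g = e^T eta e
g = sp.expand(e.T * eta * e)
h = sp.expand(g - eta)

# claim in the text: h_{mu nu} = phi_{mu nu} + phi_{nu mu} + phi_{rho mu} phi^rho_nu,
# where phi_{mu nu} = eta_{mu rho} phi^rho_nu; in matrix form:
phi_low = eta * phi
claim = sp.expand(phi_low + phi_low.T + phi.T * eta * phi)
assert sp.expand(h - claim) == sp.zeros(4, 4)

# inverse vierbein (2.15) as a geometric series, checked to O(phi^3):
ebar_approx = sp.eye(4) - phi + phi * phi
residual = sp.expand(ebar_approx * e - sp.eye(4))
assert all(sp.Poly(term, *phi).total_degree() >= 3 for term in residual if term != 0)
```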
Similarly, the compatibility of power counting and diff gauge invariance tells us that $\varphi^{i,s}_{\mu\nu} \sim \partial_\nu\, \xi^{i,s}_\mu$. Here, since $\xi^i_\mu$ and $\xi^s_\mu$ respectively contain only $n_i$-collinear and soft Fourier modes, the $\partial_\nu$ acting on $\xi^i_\mu$ scales as $(1, \lambda^2, \lambda)_i$, while the $\partial_\nu$ acting on $\xi^s_\mu$ scales as $(\lambda^2, \lambda^2, \lambda^2)$. Thus, the compatibility of power counting and diff×Lorentz gauge invariance completely fixes the relative scalings between the components of $\varphi^{i,s}_{\mu\nu}$ and $\xi^{i,s}_\mu$ as
$$\varphi^i_{\mu\nu} \sim \omega^i_{\mu\nu} \sim \lambda^b\, p^i_\mu\, p^i_\nu\,,\qquad \xi^i_\mu \sim \lambda^b\, p^i_\mu\,,\qquad \text{with } p^i \sim (1, \lambda^2, \lambda)_i\,,\tag{2.17}$$
$$\varphi^s_{\mu\nu} \sim \omega^s_{\mu\nu} \sim \lambda^c\, p^s_\mu\, p^s_\nu\,,\qquad \xi^s_\mu \sim \lambda^c\, p^s_\mu\,,\qquad \text{with } p^s \sim (\lambda^2, \lambda^2, \lambda^2)\,.\tag{2.18}$$
To determine the overall scaling exponents b and c, we need to look at the kinetic terms for $\varphi^{i,s}_{\mu\nu}$ in the action and require them to be O(λ⁰), as we discussed in Section 2.8. The kinetic terms for $\varphi^{i,s}_{\mu\nu}$ in the effective action (2.13) have the form
$$\int d^4x\; M_{\rm Pl}^2(\mu_{i,s})\; \partial_\bullet\, \varphi^{i,s}_{\bullet\bullet}\; \partial_\bullet\, \varphi^{i,s}_{\bullet\bullet}\,,\tag{2.19}$$
where different ways of contracting the spacetime indices • are summed over with some coefficients, but such details are irrelevant for our discussion here. The running reduced Planck mass $M_{\rm Pl}(\mu_{i,s})$ must be evaluated at the appropriate energy scale, $\mu_i \sim \lambda Q$ or $\mu_s \sim \lambda^2 Q$, as we discussed in Section 2.6.

Now, for the $n_i$-collinear graviton, because of (2.17) and the fact that contracting any two collinear momenta gives λ², the integrand of (2.19) scales as $\lambda^{2b}\lambda^6$. Furthermore, the $d^4x$ integration is dominated by the region of x (after subtracting an overall spacetime translation) in which $p \cdot x = O(\lambda^0)$, so $d^4x \sim dx^{+_i}\, dx^{-_i}\, d^2x^{\perp_i} \sim \lambda^{-2}\cdot\lambda^0\cdot\lambda^{-2} = \lambda^{-4}$. The above kinetic action thus scales as $\lambda^{2b}\lambda^2$. Demanding this to be ∼ λ⁰ gives b = −1. To summarize, we have
$$\varphi^i_{\mu\nu} \sim \omega^i_{\mu\nu} \sim \frac{p^i_\mu\, p^i_\nu}{\lambda}\tag{2.20}$$
and
$$\xi^i_\mu \sim \frac{p^i_\mu}{\lambda}\,.\tag{2.21}$$
Let us repeat the same exercise for the soft graviton. In this case, the integrand scales as $\lambda^{2c}\lambda^{12}$. The x in (2.19) is now conjugate to a soft momentum, so it scales as $x \sim (\lambda^{-2}, \lambda^{-2}, \lambda^{-2})$, leading to $d^4x \sim \lambda^{-8}$. We then find c = −2, i.e.,
$$\varphi^s_{\mu\nu} \sim \omega^s_{\mu\nu} \sim \frac{p^s_\mu\, p^s_\nu}{\lambda^2} \sim \lambda^2\tag{2.22}$$
and
$$\xi^s_\mu \sim \frac{p^s_\mu}{\lambda^2} \sim \lambda^0\,.\tag{2.23}$$

In [13], the scalings (2.20) and (2.22) are determined by explicitly power-counting the propagator $\langle h_{\mu\nu}\, h_{\rho\sigma}\rangle$ in a generic covariant gauge, painstakingly component-by-component in µ, ν, ρ, σ. Our derivation is much more efficient, thanks to the principle of compatibility of power counting and gauge invariance. (See also the comments in Section 2.10.3 for more on this last point.)

Finally, using the scaling relations obtained above, let us find the λ dimension of the ellipses in (2.16). The ellipses consist of terms with one $\varphi^{i,s}_{\mu\nu}$ and either one of $\partial_\rho\, \xi^{i,s}_\sigma$ or $\omega^{i,s}_{\rho\sigma}$, with two of the four indices being contracted. So, the terms in the ellipses scale as $p_\mu p_\nu\, p_\rho p^\rho / \lambda^2 \sim p^i_\mu p^i_\nu$ in the collinear case, and as $p_\mu p_\nu\, p_\rho p^\rho / \lambda^4 \sim p^s_\mu p^s_\nu$ in the soft case. Thus, compared to the terms explicitly shown in (2.16), the ellipses are suppressed by λ in the collinear case, and by λ² in the soft case. Therefore, even though the ellipses are also first order in the infinitesimal transformation parameters, we can still ignore them on the basis of the λ expansion. In particular, we do not have to assume that ϕ is also infinitesimal to justify ignoring the ellipses in (2.16). This will be important later when we derive some results that are valid to all orders in 1/M_Pl (but at a fixed order in λ).
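The power counting of this subsection is mechanical enough to put in a few lines of code. The sketch below is our illustration; the numerical assignments just restate the scalings (2.17)-(2.23) derived above:

```python
# Toy lambda-power bookkeeping (ours, not from the paper).
COLL = {'+': 0, '-': 2, 'T': 1}    # p^i ~ (1, lambda^2, lambda)_i, in units of Q
SOFT = {'+': 2, '-': 2, 'T': 2}    # p^s ~ (lambda^2, lambda^2, lambda^2)

# contracting two momenta, 2 p+ p- + pT.pT, costs the same power either way:
pair_coll = COLL['+'] + COLL['-']            # = 2 * COLL['T'] = 2
pair_soft = SOFT['+'] + SOFT['-']            # = 4

b, c = -1, -2                                # the overall exponents found above
# collinear kinetic term: lambda^{2b}, three contracted momentum pairs, d4x ~ lambda^-4
print('collinear kinetic action ~ lambda^', 2 * b + 3 * pair_coll - 4)   # -> 0
# soft kinetic term: lambda^{2c}, three contracted pairs, d4x ~ lambda^-8
print('soft kinetic action ~ lambda^', 2 * c + 3 * pair_soft - 8)        # -> 0

# preview of Section 3.3: the would-be cross-collinear coupling
# phi^i_{mu nu} p_j^mu p_j^nu ~ (p_i . p_j)^2 / lambda, with p_i . p_j ~ lambda^0:
print('phi^i p_j p_j ~ lambda^', 0 + 0 - 1)                              # -> -1
```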
Matter fields and their scalings

Let us quickly repeat the above exercise for matter (i.e., non-gravitational) fields for the sake of completeness. The reader does not need this section to understand the rest of the paper and may skip to Section 2.11. For the cases of spin 0, 1/2, and 1, the results are well known in the literature, but our derivations of the results for the collinear spin-1/2 and spin-1 cases, which do not refer to any detailed forms of the lagrangians or propagators, are not found in the existing literature to the best of our knowledge. For the spin-3/2 case, our results also seem new.

Spin 0

For a soft scalar field $\phi_s$, demanding $\int d^4x\, \partial\phi_s\, \partial\phi_s \sim \lambda^0$ with $d^4x \sim \lambda^{-8}$ and $\partial \sim \lambda^2$ gives $\phi_s \sim \lambda^2$. For an $n_i$-collinear scalar $\phi_i$, the contraction of the two derivatives gives λ² and $d^4x \sim \lambda^{-4}$, so the same demand gives $\phi_i \sim \lambda$.

Spin 1/2

For a soft spinor field $\psi_s$, demanding $\int d^4x\, \bar\psi_s\, \slashed{\partial}\, \psi_s \sim \lambda^0$ with $d^4x \sim \lambda^{-8}$ and $\partial \sim \lambda^2$ tells us that $\psi_s \sim \lambda^3$. For an $n_i$-collinear spinor $\psi_i$, it is more complicated, because the spatial non-isotropy of $n_i$-collinear momenta tells us that different spinor components of $\psi_i$ might scale differently. To skirt around this complication, let us first boost the frame in the $\vec n_i^+$ direction by a rapidity ∼ log λ, so that $n_i$-collinear momenta scale isotropically as ∼ (λ, λ, λ)ᵢ in the new frame. In this frame, we simply have $d^4x \sim \lambda^{-4}$ and $\partial \sim \lambda$ in $\int d^4x\, \bar\psi_i\, \slashed{\partial}\, \psi_i \sim \lambda^0$, implying that $\psi_i \sim \lambda^{3/2}$ for all components of $\psi_i$. Now, if $\psi_i$ is a right-handed spinor, this boost multiplied the positive- and negative-helicity components of $\psi_i$ by factors of $\lambda^{1/2}$ and $\lambda^{-1/2}$, respectively, when we got to the new frame. So, the positive- and negative-helicity components of $\psi_i$ must have scaled as λ and λ², respectively, in the original frame before the boost. If $\psi_i$ is a left-handed spinor, the scalings of the positive- and negative-helicity components are simply reversed from the right-handed case, so the negative-helicity component of $\psi_i$ scales as ∼ λ and the positive-helicity one as ∼ λ².

To pick out the helicity components along the $\vec n_i^+$ direction from a spinor, we define the helicity projection operators
$$P_i^\pm \equiv \tfrac{1}{2}\, \slashed{n}_i^\pm\, \slashed{n}_i^\mp\,,$$
where our convention for the Dirac γ matrices is such that $\gamma^\mu \gamma^\nu + \gamma^\nu \gamma^\mu = 2\eta^{\mu\nu}\mathbb{1}$. Any spinor ψ can then be decomposed as $\psi = P_i^+\psi + P_i^-\psi$. Since $P_i^\pm$ commutes with $\gamma^5$, we can do these helicity projections separately for each chirality of ψ. Then, we see that $P_i^+$ picks up the positive-helicity component along the $\vec n_i^+$ direction from a right-handed spinor, and the negative-helicity component from a left-handed spinor; whichever helicity component remains is picked up by $P_i^-$. Therefore, the results we have found above can be summarized as
$$P_i^+\, \psi_i \sim \lambda\,,\qquad P_i^-\, \psi_i \sim \lambda^2\,.$$
Finally, with respect to an $n_j$-collinear direction with j ≠ i, we have
$$P_j^\pm\, \psi_i \sim \lambda\,,$$
because different collinear sectors are well separated in direction, so both of $P_j^\pm$ pick up the bigger of the two components from $\psi_i$ (i.e., the one scaling as ∼ λ).

Since the small components of spinors (i.e., those scaling as λ²) have the wrong helicity to be on-shell, they are not propagating degrees of freedom. It is therefore a common practice to integrate them out of the effective lagrangian. However, for the purpose of keeping track of RPI, it is convenient to retain the small components in the lagrangian as auxiliary fields, similarly to the method proposed by [28] for maintaining manifest RPI in HQET. (It is also morally similar to keeping the F and D components in a supersymmetric lagrangian for a better bookkeeping of supersymmetry.) This is quite obvious for $S_{\rm full}[\Phi_i]$, where there is nothing "collinear" about this action in isolation. For $S_{\rm hard}$, the small components can be ignored at the leading power, but they should be put back when we move on to the next-to-leading power, to make RPI manifest and to take into account the constraints from RPI; we will come back to this point later in Section 4.5.
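A quick numerical sanity check of the projector algebra above (our code; the explicit Weyl-basis γ matrices are a standard choice, not the paper's):

```python
import numpy as np

# Dirac matrices in the Weyl basis, metric +---
s0, s1 = np.eye(2), np.array([[0, 1], [1, 0]])
s2, s3 = np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])
zero = np.zeros((2, 2))
g = [np.block([[zero, m], [mbar, zero]])
     for m, mbar in [(s0, s0), (s1, -s1), (s2, -s2), (s3, -s3)]]
g5 = np.diag([-1, -1, 1, 1]).astype(complex)

def slash(v):                       # v_mu gamma^mu, v given with upper indices
    return v[0]*g[0] - v[1]*g[1] - v[2]*g[2] - v[3]*g[3]

n_p = np.array([1.0, 0.0, 0.0,  1.0]) / np.sqrt(2)   # n+ . n- = 1 as in (2.1)
n_m = np.array([1.0, 0.0, 0.0, -1.0]) / np.sqrt(2)
Pp = 0.5 * slash(n_p) @ slash(n_m)
Pm = 0.5 * slash(n_m) @ slash(n_p)

assert np.allclose(Pp @ Pp, Pp) and np.allclose(Pm @ Pm, Pm)   # projectors
assert np.allclose(Pp + Pm, np.eye(4))                          # complete
assert np.allclose(Pp @ g5, g5 @ Pp)                            # commute with gamma^5
```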
Needless to say, the above comment also applies to the analogous small components of higher spin fields discussed below. Spin 1 Since we have assumed that our particles all have no or negligible mass, a spin-1 particle must be a gauge boson. Then, as we already discussed in Section 2.8, the compatibility of power counting and gauge invariance tells us that A i µ ∼ λ a p i µ for an n i -collinear spin-1 gauge boson A i with a momentum p i ∼ (1, λ 2 , λ) i . Similarly, we have A s µ ∼ λ b p s µ for a soft spin-1 gauge boson A s with p s ∼ (λ 2 , λ 2 , λ 2 ). Then, the requirement d 4 x ∂A∂A ∼ λ 0 tells us that a = b = 0, i.e., The above results are obtained in the literature by examining the explicit expression of the propagator A µ A ν in a generic Lorentz-covariant gauge, or by expanding the kinetic action explicitly in the light-cone coordinates without choosing the gauge and demanding that each term is ∼ λ 0 . A conceptual advantage of our derivation based on the compatibility of power counting and gauge invariance is that it makes it clear why we want to work in a covariant gauge or without choosing the gauge initially, because the compatibility would fail if a non-covariant gauge is imposed. For example, let's choose a lightcone gauge by setting A + i = 0. This completely kills diagrams that are proportional to the polarization + i (p). But these diagrams are the ones that would be the leading diagrams in the Ward identity from µ (p) → p µ as they would be proportional to p + i . Thus, the λ expansion of an amplitude does not agree with that of the corresponding Ward identity-hence incompatible in our language-in the light-cone gauge. This does not necessarily mean that formulating a SCET in a non-covariant gauge is wrong, but just that ensuring or checking gauge invariance of the theory would become a complicated problem (see [29,30] for a manifestation of the complications) because gauge invariance would relate terms of different orders in λ in non-covariant gauges. This is why we have elevated the compatibility of power counting and gauge invariance to a guiding principle for constructing a SCET. Spin 3/2 Again, since our particles are all assumed to have no or negligible mass, a spin-3/2 field ψ µ must be a gauge field, i.e., a gravitino. The gauge transformation (i.e., the supergravity transformation) has the form ψ µ → ψ µ + D µ χ + · · ·, where the gauge transformation parameter χ is a spinor and the ellipses represent higher order terms analogous to the ellipses of (2.16). Again, the compatibility of power counting and gauge invariance tells us that ψ µ ∼ p µ χ. Then, for a soft gravitino ψ s µ , the λ invariance of the kinetic action tells us that For an n i -collinear gravitino ψ i µ , we use the same trick as we did for the spin-1/2 case and boost to an "isotropic frame". In this frame, we have ψ i µ ∼ λ 3/2 , and hence χ i ∼ λ 1/2 . This means that we must have had P + i χ i ∼ λ 0 and P − i χ i ∼ λ in the original frame before the boost. Therefore, from ψ i µ ∼ p i µ χ i in the original frame, we see that an n i -collinear gravitino ψ i µ must scale as for the same reason mentioned for the similar relation for the spin 1/2 case. Nonlocality in SCET Having discussed the symmetry structure (2.12) and worked out the power counting rules for individual fields above, let us comment on a rather unusual feature of SCET, namely, its nonlocality. In SCET, we actually have two types of nonlocality. 
First, our power counting rules allow different fields to be located at different spacetime points in $S_{\rm hard}$, as we will describe more precisely below in Section 2.11.1. Second, mode separation implies that the fields themselves, as well as the (anti-)commutators among them, are "smeared"; this will be described in Section 2.11.2. Of course, despite these nonlocal building blocks, a SCET that is matched onto a local full theory is local.

Nonlocality in hard interactions

For a generic $n_i$-collinear field $\Phi_i(x)$, we have $\partial \Phi_i / \partial x^{-_i} \sim \lambda^0\, \Phi_i$. This means that the λ power counting does not allow the Taylor expansion of $\Phi_i(x^{+_i}, x^{-_i} + s, x^{\perp_i})$ in powers of s to be truncated at any finite order in s. Therefore, different $n_i$-collinear fields with the same i can have different $x^{-_i}$ coordinates in $S_{\rm hard}$ [8-10]. On the other hand, since $\partial \Phi_i / \partial x^{+_i} \sim \lambda^2\, \Phi_i$ and $\partial \Phi_i / \partial x^{\perp_i} \sim \lambda\, \Phi_i$, the Taylor expansions in the $x^{+_i}$ and $x^{\perp_i}$ coordinates can be (actually, must be, for manifest power counting) truncated at a finite order. So, all $n_i$-collinear fields in $S_{\rm hard}$ must have the same $x^{+_i}$ and $x^{\perp_i}$ coordinates. This is in stark contrast to the situation in the more familiar Wilsonian EFTs, in which all components of a derivative are equally suppressed by the cutoff Λ, rendering the effective lagrangians completely local in all coordinates.

As an example, imagine two $n_1$-collinear scalars $\phi_1$, $\chi_1$, and two $n_2$-collinear scalars $\phi_2$, $\chi_2$. Then, $S_{\rm hard}$ might contain a term like
$$\int d^4x \int ds_1\, dt_1\, ds_2\, dt_2\;\; C(s_1, t_1, s_2, t_2)\;\, \phi_1(x + s_1 n_1^-)\, \chi_1(x + t_1 n_1^-)\;\, \phi_2(x + s_2 n_2^-)\, \chi_2(x + t_2 n_2^-)\,,$$
where the addition of $s\, n_i^{-\mu}$ to $x^\mu$ shifts $x^{-_i}$ by s. The "Wilson coefficient" $C(s_1, t_1, s_2, t_2)$ may be determined by matching the EFT onto the full theory, or related by symmetry to the Wilson coefficient of another operator.

The physical length scale of the nonlocality in the $x^{-_i}$ coordinate is O(Q⁻¹). Since our shorthand expression $p^{+_i} \sim \lambda^0$ actually means $p^{+_i} \sim Q$, rapid oscillations of $e^{ip\cdot x} \sim e^{i p^{+_i} x^{-_i}}$ will damp the "extra" $dx^{-_i}$ integrations in $S_{\rm hard}$ (such as $ds_1 \ldots dt_2$ in the above example) once two fields get separated by a distance larger than ∼ Q⁻¹ in the $x^{-_i}$ component. This length scale itself is expected from the fact that we are integrating out off-shell modes whose virtuality is of O(Q). However, if our EFT were a Wilsonian EFT with a cutoff Λ ∼ Q, such nonlocality would not actually lead to nonlocal operators, because all modes in the EFT would have wavelengths much longer than O(Q⁻¹) and so would not be able to probe the nonlocality. In SCET, in contrast, there are momentum modes with components of O(Q), so they can actually probe nonlocality on length scales of O(Q⁻¹).

Nonlocality in field operators

Due to mode separation, field operators in SCET are themselves nonlocal. First, consider a soft scalar field $\phi_s(x)$ and its canonical conjugate momentum $\pi_s(x)$. Since $\phi_s$ and $\pi_s$ only contain soft Fourier modes, their canonical commutation relation is given by
$$[\phi_s(t, \vec x)\,,\; \pi_s(t, \vec y)] = i\, \delta^3_s(\vec x - \vec y)\,,$$
where $\delta^3_s(\vec x)$ is a 3-dimensional "δ-function" made only of soft Fourier modes, rather than of all Fourier modes. Therefore, $\delta^3_s(\vec x)$ is not exactly point-like at $\vec x = 0$ but "smeared" over a length scale of $O(\lambda^{-2} Q^{-1})$, which is much larger than O(Q⁻¹), the shortest length scale in the EFT. The (anti-)commutation relations among collinear fields also have a similar, "smeared" kind of nonlocality in their respective x⁺ and x⊥ directions. We will see in Section 3.2.3 that the realization of the soft diff×Lorentz symmetry through the field redefinition (2.11) actually "exploits" the smearing in the soft graviton field.
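The "smeared" delta function of Section 2.11.2 is easy to visualize numerically. The snippet below (our illustration; the cutoff k_s stands in for the soft scale λ²Q) builds a 1-dimensional delta function out of only the Fourier modes |k| < k_s, and confirms that it integrates to 1 while being spread over a length of order 1/k_s:

```python
import numpy as np

k_s = 1.0                                   # stand-in for the soft cutoff ~ lambda^2 Q
x = np.linspace(-40.0, 40.0, 2001)
dx = x[1] - x[0]

# delta_s(x) = int_{-k_s}^{k_s} dk/(2 pi) e^{ikx} = sin(k_s x)/(pi x)
delta_s = (k_s / np.pi) * np.sinc(k_s * x / np.pi)

print('integral   =', delta_s.sum() * dx)   # ~ 1, like a true delta function
print('lobe width =', 2 * np.pi / k_s)      # smeared over ~ 1/k_s, not point-like
```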
Gravity SCET at the Leading Power (LP) Let us now more explicitly construct gravity SCET at the Leading Power (LP). When we talk about S full [Φ i,s ] in the effective action, LP literally refers to O(λ 0 ) terms in S full . When we talk about S hard , on the other hand, LP actually refers to the leading nontrivial order in λ, the precise meaning of which will become clear in Section 3.2.1. Needless to say, when we say Next-to-Leading Power (NLP), that refers to one higher power of λ compared to the LP. No LP graviton couplings within each collinear or soft sector We will first show the absence of O(λ 0 ) gravitational interactions in S full [Φ i,s ] in the effective action (2.13), based only on mode separation, power counting, and the factorized gauge symmetry (2.12). No LP graviton couplings within each collinear sector The mode separation (2.13) and λ power counting tell us that the n i -collinear graviton field has no O(λ 0 ) interactions in S full [Φ i ]. As we already discussed, S full [Φ i ] is exactly the full theory action at the energy scale ∼ λQ, containing all vertices that only join n i -collinear lines. In particular, there is nothing "n i -collinear" about S full [Φ i ] in isolation because we can boost in the n + i direction by a rapidity of ∼ log λ to a frame in which what used to scale as ∼ (1, λ 2 , λ) i in the original frame now scales as ∼ (λ, λ, λ) i . (This is a global Lorentz boost so the graviton field also transforms covariantly just like everyone else.) If we do dimensional analysis in such a frame, each mass dimension of the fields and derivatives in S full [Φ i ] simply counts as λQ, because this is the only dynamical scale in S full [Φ i ]. Then, since every gravitational interaction comes with a positive power of 1/M Pl , it comes with a positive power of λQ, thereby vanishing as λ → 0. We thus clearly see that S full [Φ i ] has no LP n i -collinear graviton couplings to n i -collinear particles, including n i -collinear gravitons themselves. Diagrammatically, an n i -collinear graviton line can never be attached to any n i -collinear line with the same i at the LP. Building blocks of hard interactions Let us now talk about S hard [Φ 1 , . . . , Φ N , Φ s ] in the effective action (2.13). Matching The very first step for constructing S hard [Φ 1 , . . . , Φ N , Φ s ] is matching. To perform matching, turn off all the interactions in the EFT that are forced upon us by the factorized gauge symmetry (2.12). Let's call this limit purely hard. The purely hard limit still leaves the global part of G-e.g., the global Lorentz invariance-completely intact in the EFT. The limit also keeps all interactions that are not required by (2.12). 14 The purely hard limit is not applied to the full theory. Upon matching, purely hard amplitudes from the EFT are set equal to the corresponding amplitudes from the full theory. This determines S pure [Φ 1 , . . . , Φ N ], that is, the purely hard part of S hard [Φ 1 , . . . , Φ N , Φ s ]. Note that there is no Φ s in S pure [Φ 1 , . . . , Φ N ] because, as stated in Section 2.1.1, a soft particle is there only when it is "required by nature"-which in the present context means gauge invariance-so the purely hard limit excludes soft particles from consideration. The warning we mentioned at the beginning of Section 3 regarding the meaning of "LP" for S hard can now be stated more clearly: When we talk about S hard , LP refers to the lowest λ dimension in S pure . 
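To make the dimensional-analysis argument of Section 3.1.1 concrete, here is our illustration (not a display from the paper), written for the canonically normalized graviton $\varphi_c \equiv M_{\rm Pl}\,\varphi$. The n-graviton vertices obtained by expanding the Einstein-Hilbert action have the schematic form

$$M_{\rm Pl}^2 \int d^4x\, \sqrt{-g}\,R \;\supset\; \int d^4x\;\, \partial^2 \varphi_c^{\,2}\, \left(\frac{\varphi_c}{M_{\rm Pl}}\right)^{n-2}, \qquad n \ge 3\,.$$

In the boosted frame described above, every mass dimension in $S_{\rm full}[\Phi_i]$ counts as λQ, so, relative to the kinetic term (n = 2), each additional graviton costs a factor of $\lambda Q / M_{\rm Pl}$, which vanishes as λ → 0. The same counting applies to the couplings of the collinear graviton to matter fields in the same sector.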
Matching in the purely hard limit can be done at any desired order in the number of loops and coupling constants of the full theory, such as 1/M Pl (Q). In the following analyses, we will see that once S pure is given, S hard can be uniquely constructed at the LP, thanks to the factorized gauge symmetry (2.12). We first discuss collinear graviton interactions in Let O i be the product of all n i -collinear objects in any one of the terms in S hard [Φ 1 , . . . , Φ N , Φ s ], where O i may be a single n i -collinear field or the product of n i -collinear fields with or without derivatives, γ-matrices, etc. A key point is that the factorized effective gauge symmetry (2.12) requires O i to be not only invariant under all G j with j = i but also under G i itself, because nothing else other than O i itself is charged under G i within the same term of S hard . This seemingly trivial requirement turns out to be extremely powerful for constraining possible hard interactions, going much further than the constraints from the global part of G common to all sectors. Consider an arbitrary n i -collinear local operator Φ (r) i (x) that is a scalar under the n i -collinear diff group and transforms covariantly as a representation r under the n i -collinear local Lorentz group. We make it a diff scalar even for an integer-spin r by appropriately multiplying it by vierbeins. If derivatives are acting on an n i -collinear field, we include all of them inside Φ is literally just the field operator of an n i -collinear particle, this invariance is trivially implied by the absence of LP n i -collinear graviton couplings to n i -collinear particles discussed in Section 3.1.1. However, as we said above, Φ (r) i may contain derivatives and vierbeins in addition to the "elementary" field operator. Therefore, we must show the invariance of Φ (r) i without referring to what it is made of. So, let's directly examine an infinitesimal n i -collinear diff×Lorentz gauge transformation on Φ where Σ µν r are the Lorentz generators for the representation r, satisfying the algebra with (η αβ ) = diag (1, −1, −1, −1) and without a factor of i on the right-hand side. According to (2.21), the term with ξ µ i in (3.1) scales as ( i , so it should be discarded at the LP. It turns out that the ω i µν term also scales as ∼ λΦ (r) i and hence should be discarded. To see this, let's boost in the n + i direction by a rapidity of ∼ log λ so that n i -collinear momenta now scale as ∼ (λ, λ, λ) i . Then, the scaling law (2.21) in the boosted frame becomes are both Lorentz covariant, the fact that the latter is suppressed by λ compared to the former must hold true in the original frame as well. We thus conclude that Φ (r) i does not transform at all at the LP under the n i -collinear diff×Lorentz gauge group. Therefore, Φ (r) i is already completely gauge invariant at the LP under all collinear diff×Lorentz groups, and hence it may by itself form O i . 15 It is possible that a purely hard process may have an n i -collinear graviton. However, the n icollinear graviton field ϕ i µν cannot be a Φ (One should also recall that those scaling laws come partly from the requirement that gauge fields in general should transform at the LP so that their unphysical polarizations can be gauged away at the LP.) Therefore, a "bare" ϕ i µν is not factorized gauge invariant and hence cannot appear in S hard . But we can find a covariant object Φ (r) i that contains a ϕ i µν . 
There are three possible covariant objects that can create or annihilate one graviton with the fewest possible derivatives. The simplest possibility is the n i -collinear Ricci scalar R i made of h i µν . Since 1-graviton terms in R i form a scalar built out of two derivatives and a collinear graviton field, they scale as p µ p ν p µ p ν /λ with an n i -collinear momentum p, so we have The next simplest object is the n i -collinear Ricci tensor R i µν . The 1-graviton terms in it scale as p µ p ν p ρ p ρ /λ with an n i -collinear momentum p, so we have Similarly, the 1-graviton terms in the n i -collinear Riemann tensor R i µνρσ scale as (3.5) 15 Note that Φ Beware of the antisymmetry in each of the µ-ν and ρ-σ pairs. So, the largest component is given by In all the three cases above, 2-graviton terms are further suppressed by an extra p·p/λ ∼ λ (and, thus, every extra collinear graviton field costs an additional λ). Any of these three objects, (3.3), (3.4), or (3.5), can be a Φ (r) i and hence can form an O i by itself. Other twoderivative covariant objects, such as the conformal or Weyl tensor C i µνρσ , are linear combinations of these three objects. These are all local operators, but since nonlocality in the x − i direction is allowed in SCET, we can also construct nonlocal invariant operators by integrating (3.3), (3.4), or (3.5) in the x − i direction, thereby providing us with invariant objects with fewer derivatives than two. But such constructions are already implicit in the general nonlocal structure of SCET described in Section 2.11.1 and hence not new given (3.3), (3.4), (3.5). One may also wonder if there are other nonlocal invariant objects built out of ϕ i µν . There are-they are called Wilson lines-but it turns out that they are nontrivial only if we proceed to the NLP, so they will be discussed in Section 4. The fact that Φ (r) i is invariant under all collinear diff×Lorentz gauge groups at the LP in particular means that there are no graviton couplings from the covariant derivatives that may be "hiding" in Φ is invariant under all diff×Lorentz groups at the LP, provided that it is a covariant object itself. 16 The field redefinition (2.11) makes it also invariant under the soft diff×Lorentz group. Therefore, as far as gravity is concerned, the D µ just becomes a ∂ µ at the LP. We should also recall that Φ On the other hand, the Lorentz vector indices from "naturally diff" indices-i.e., those of derivatives and of fields with spin ≥ 1-actually contain vierbeins and, hence, gravitons. However, the couplings of these gravitons are at most NLP. For example, consider ∂ µ with a Lorentz index µ, which contains a graviton coupling of the form φ i ν µ ∂ ν . From the scaling law (2.20), we have ϕ i ν µ ∂ ν ∼ p µ p ν p ν /λ ∼ p µ λ, where p is an n i -collinear momentum. In the worst case, this p µ would be contracted with a momentum in another collinear sector and thus counts as LP due to the cross-collinear scaling (2.8). So, φ i ν µ ∂ ν is suppressed at least by λ. If the ∂ ν above is replaced by a spin-1 gauge field, the conclusion is unchanged as spin-1 fields scale just like momenta (see (2.30)). If the ∂ ν is replaced by a spin-3/2 field, it is again the same conclusion (see (2.32)). Finally, if it is replaced by one of the indices of (3.4) or (3.5), it is again suppressed by at least λ. Therefore, at the LP, all vierbeins inside Φ (r) i should be replaced by Kronecker deltas. 
Combining this with the observation of the preceding paragraph, we conclude that there are no LP couplings of n i -collinear gravitons "hiding" inside Φ i . This might be a good place to discuss the implications of the global Lorentz invariance common to all sectors. For example, the way the indices of R i µν and R i µνρσ appear in S hard must obey the global Lorentz invariance. So, for example, R i − i − i must be part of a globally Lorentz invariant object such as (n − i ) µ (n − i ) ν R i µν and R i µν × (an n j -collinear operator) µν , where the cross-collinear scaling (2.8) should be applied to the latter to get R i − i − i . Needless to say, such considerations must 16 So, Φ be applied to all vector/spinor indices in S hard , not just those of the curvature tensors. To summarize, we first start with the purely hard limit, S pure [Φ 1 , . . . , Φ N ], matched at the leading nontrivial order in λ. We then turn on all collinear diff×Lorentz gauge groups of the factorized effective gauge symmetry (2.12) (but G s still turned off). At the LP, this does not change anything, i.e., we just have S hard [Φ 1 , . . . , Φ N ] = S pure [Φ 1 , . . . , Φ N ] at the LP. There are no collinear graviton couplings here. Soft graviton couplings in S hard at the LP Now, let's finally turn on G s . As we discussed in Section 2.5, this amounts to performing the field redefinition (2.11) for each Φ i in S hard [Φ 1 , . . . , Φ N ] obtained above (that is, just S pure [Φ 1 , . . . , Φ N ] itself). So, the goal of this section is to determine the form of the Y functional in (2.11) at the LP. Note that, by the definition of the purely hard limit introduced in Section 3.2.1, the soft graviton couplings we get in going from S pure to S hard are just those required by the soft gauge invariance. We will see that Y is completely determined by the soft gauge invariance alone at the LP. Consider again an arbitrary n i -collinear, Lorentz-covariant, diff-scalar operator Φ (r) i of representation r of the n i -collinear local Lorentz group, as we did in Section 3.2.2. First, let Φ (r) i refer to the "original" collinear field before the field redefinition (2.11), i.e., the one that is still charged under G s . So, analogous to (3.1), we have an infinitesimal soft diff×Lorentz gauge transformation: Here, unlike in (3.1), Φ i does transform at the LP. In particular, the ξ − i s ∂ − i piece inside the ξ µ s ∂ µ term above is LP, because we have ξ µ s ∼ λ 0 from (2.23) and the ∂ µ above scales as an n i -collinear momentum so ∂ − i = ∂ + i ∼ λ 0 . The remaining terms in ξ µ s ∂ µ are NLP or higher. All components of ω s µν scale as ∼ λ 2 from (2.23). Here, without any calculations, we immediately see two well-known pieces of physics. First, soft graviton couplings at the LP are all spin independent, i.e., independent of r, because the action of ξ µ s ∂ µ on Φ (r) i does not depend on r. This leads to the second point that spin dependence must be a Next-to-Next-to-Leading Power (NNLP) effect as it is only associated with the ω s µν term in (3.6). To determine the Y functional in (2.11) at the LP, let's focus on the LP piece of (3.6): where we have dropped the (r) because spin does not matter at this order. Therefore, in order for the field redefinition (2.11) to take care of the transformation (3.7), all we care about is the ξ s + i piece of the soft diff×Lorentz. That is, at the LP, we want to have with the soft graviton field ϕ s . 
To find such an object, let us return to the soft diff×Lorentz gauge transformation (2.16) for ϕ s µν . There, one sees that, in order to have any chance of getting a ξ s + i to match ( Since ξ s + i in (3.8) is not differentiated by a ∂ + i , we must integrate ϕ s + i + i by dx + i to remove the ∂ + i in (3.9). Therefore, we see that the structure exp ± has exactly the right transformation property to be Y i [ϕ s ], provided that we choose the sign and limits of integration appropriately to exactly match (3.8). (The coordinates x − i and x ⊥ i are implicit in (3.10).) An obvious choice is to place x 2 at the point x where the Φ i (x) that Y i is acting on is located. Then, we must choose the + sign in (3.10) and x 1 must be placed at a past infinity, x + i 1 → −∞, so that we do not get any contribution to (3.8) from the x 1 end of the integral. Namely, where the addition of sn µ + i to x µ is shifting x + i by s. Alternatively, we can place x 1 at x, choose the − sign in (3.10) and send x 2 to a future infinity, Which solution should we use? In field theory, when we integrate over energy p 0 from −∞ to ∞, we must rotate the contour infinitesimally counterclockwise in the complex p 0 plane to ensure that it is always the positive energy wave that propagates to the future. Now, suppose we have a soft graviton line with its energy going into a vertex from Y i . This graviton is annihilated at the vertex by the positive frequency part of ϕ s + i + i (x + sn + i ), which goes as exp[−ip + i s] in terms of the graviton's momentum p + i > 0 and integration variable s. Then, in order for the integration over s to converge after p + i is replaced by p + i + i with an infinitesimal positive , we must send s to −∞. If the soft graviton's energy is going out of the vertex from Y i , we must send s to +∞ instead. Thus, combining both cases, we arrive at the expression where for any quantum field f (s) = f + (s) + f − (s) with f + (s) and f − (s) being the positive and negative frequency parts, respectively, and s being a shift in one of the coordinates, we define ds as (3.14) As we argued above, this splitting is necessary so that we can Wick-rotate the energy components of all momentum integration variables consistently in the counterclockwise direction in the complex energy plane. 17 Let's check explicitly that the soft graviton couplings from Y i to n i -collinear particles are indeed LP. First, in the above expressions for Y i , the derivative ∂ − i should not act on ϕ s + i + i (x + sn + i ) because, if it did, it would give ∼ λ 2 due to the scaling law (2.22). Rather, it should only act on the n i -collinear field Φ i that Y i acts on. Then, ∂ − i in (3.13) counts as ∼ λ 0 because ∂ − i Φ i ∼ λ 0 Φ i . Next, to understand how ds in (3.13) scales, it is important to return to the field redefinition (2.11) to recognize that the x of Y i (x) is the x of the n i -collinear field Φ i (x), not an x of a soft field. So, we We thus see that the exponent in (3.13) is indeed LP. Combining this with the above symmetry argument that led to the expression (3.10), we conclude that Y i gives us all LP couplings of soft gravitons in S hard that are correct to all orders in 1/M Pl (µ s ), where µ s is the soft scale ∼ λ 2 Q. This is because, as we pointed out at the end of Section 2.9, the ellipses in (2.16)-which are ∝ O(1/M Pl ) when ϕ s µν is canonically normalized-are NNLP so (3.9) is an exact infinitesimal diff transformation for a finite ϕ s + i + i at the LP and also NLP. 
Finally, let us comment on the apparent nonlocality in Y i , where the soft graviton field is displaced in the x + i direction, a direction not allowed for a field to be displaced according to Section 2.11.1. To show that our SCET is a consistent EFT, we must demonstrate purely within the EFT, without appealing to the locality of the full theory, that this "forbidden" nonlocality is actually a fake. As the above derivation makes it clear, the origin of Y i is the field redefinition (2.11) to achieve a complete manifest separation of soft modes from collinear modes. We can make the soft graviton couplings manifestly local at the expense of manifest mode separation by simply undoing the field redefinition (2.11). Note that the redefinition (2.11) can be viewed as just a soft diff gauge transformation on Φ i without the accompanying transformation of the soft graviton field, which has the effect of removing the soft graviton field from a soft diff covariant derivative on Φ i . So, undoing (2.11) puts a derivative of Y i at where the soft graviton was inside the soft diff covariant derivative. From the form of Y i , we see that this derivative of Y i is just ϕ s in question. This is local. Therefore, the "forbidden" nonlocality in Y i is just an artifact of the field redefinition (2.11). Having seen that the nonlocality of Y i is illusory, we should stick to the field-redefined version of Φ i (i.e., the one in which Φ i is neutral under G s ), as manifest mode separation is more important than manifest locality from the EFT viewpoint. (In any case, there are other nonlocal objects in SCET that cannot be field-redefined away.) There is one thing, however, that still might seem puzzling. Namely, again in the basis where the field redefinition (2.11) is undone, there are no soft gauge fields appearing in S hard . But the collinear fields in S hard are displaced from each other in their respective x − directions and they are charged under G s in this basis, so how could S hard be gauge invariant under G s (which we know is nontrivial at the LP)? The resolution is that S hard is actually local from the viewpoint of soft gauge transformations because the physical length scale of nonlocality in S hard is only ∼ Q −1 as we discussed in Section 2.11.1, while the shortest wavelengths of the Fourier modes in soft gauge transformations U s (x) ∈ G s are ∼ λ −2 Q −1 , much larger than Q −1 . Therefore, at the LP and also NLP, soft gauge transformations effectively act as global transformations for S hard . And since S hard respects global symmetry, it is also invariant under G s at the LP and NLP. Nontrivial effects will arise only if we proceed to NNLP, which is a conceptually interesting and important problem that is beyond the scope of this paper. Thus, the "smearing" nonlocality discussed in Section 2.11.2 actually plays a role in making the theory consistent. With Y i included in S hard , we now have all LP interactions of soft gravitons, which completes the construction of S hard at the LP. The exponential (3.13) is often referred to as a soft gravitational Wilson line in the literature (usually without the splitting by (see footnote 17)), which was identified in full-theory analyses by studying amplitudes in the soft graviton limit [2,[34][35][36]. 
Our EFT derivation above only uses symmetry and power counting without ever looking at diagrams, which not only reveals the true meaning of soft Wilson lines as coming from the field redefinition (2.11) to achieve manifest mode separation, but also makes it self-evident that Y i gives all LP soft graviton couplings that are correct to all orders in the soft gravitational coupling, 1/M Pl (λ 2 Q), as we pointed out above. Summary of gravity SCET at the LP The EFT symmetry and power counting have told us, without any actual calculations, that: • There are no LP collinear graviton couplings anywhere in S eff except for those that may be already present in S pure [Φ 1 , . . . , Φ N ] at the matching. • All LP soft graviton couplings are in Recall our convention that a Φ i is a scalar under the n i -collinear diff and transforms covariantly under the n i -collinear local Lorentz group. So, in particular, an n i -collinear graviton must come in one of the forms (3.3), (3.4), (3.5) to be a Φ i . If derivatives are acting on an n i -collinear field, they should be included inside the Φ i . In particular, this means that the derivatives sit to the right of the Y i . That is, the derivatives act on the field first and then the Y i act. We have noted that, at the LP, all those derivatives are just ordinary derivatives ∂, not covariant derivatives, as far as gravity is concerned. We have also noted that all vierbeins inside Φ i (which are put to make it a diff scalar) should be replaced by Kronecker deltas at the LP. Therefore, there are no LP collinear graviton couplings "hiding" inside Φ i . We now have a complete gravity SCET lagrangian at the LP for our target phase space. Our derivation using power counting and symmetry shows that this lagrangian is correct at the LP to all orders in the soft and collinear gravitational couplings, 1/M Pl (λ 2 Q) and 1/M Pl (λQ), and at any desired fixed orders in the number of loops and full-theory coupling constants (e.g., 1/M Pl (Q)) at matching. Needless to say, for amplitudes (rather than the lagrangian), getting all 1/M Pl (λ 2 Q) and 1/M Pl (λQ) dependences at the LP requires evolving S hard from the hard scale ∼ Q down to the physical scale of interest, by using renormalization group (RG) equations calculated within the SCET. What is the physical scale of interest? at the LP, loops correcting the vertices in S hard are all coming from soft graviton loops (as far as gravity is concerned). The scale of virtuality in those loop integrals are hence ∼ λ 2 Q. So, the physical scale of interest is λ 2 Q. The RG evolution from Q to λ 2 Q resums a series of powers of large logarithms of the ratio of the hard to soft scales, with each logarithm multiplied by a power of the "coupling constant" Q/M Pl (λ 2 Q). Such resummation is important if λ is so small that the large logarithm log(1/λ 2 ) compensates for the suppression Q/M Pl (λ 2 Q). No logarithms of the ratio of the hard to collinear scales exist at the LP because there are no collinear graviton corrections to S hard at the LP so the 1/M Pl (λQ) dependence we get from matching is actually already correct without RG evolution. In gravity SCET, no actual calculations are necessary to see that there are no LP couplings of collinear gravitons, thanks to mode separation and the factorized gauge symmetry. Demonstrating this result in the full theory is extremely nontrivial. 
In a full-theory diagram where an n_i-collinear graviton is attached to an n_j-collinear line with j ≠ i, it appears that the coupling is "Bigger-than-Leading" Power (BLP), because h^i_{μν} p_j^μ p_j^ν ∼ (n^+_i · n^+_j)^2/λ ∼ 1/λ from the cross-collinear scaling (2.8) and the collinear graviton scaling (2.20). However, after adding up all diagrams with all possible places to which this graviton can be attached, one finds that all BLP and LP contributions completely cancel out. For example, the amplitudes in Appendices A.1-A.3 exhibit these dramatic cancellations if calculated in the full theory. Such cancellations can be demonstrated in the full theory in full generality through a careful combinatorial analysis combined with Ward identities [2], while our gravity SCET lagrangian simply does not have any BLP or LP couplings of collinear gravitons from the outset. Thus, our EFT passes the test of manifest power counting in the sense described in Section 2.1.2.

Soft/collinear theorems for gravity at the LP

The LP soft theorem for gravity [1,37] [2,35,36] is completely self-evident in the EFT at the lagrangian level, without any need for analyzing amplitudes or diagrams. The structure of S_hard derived in Section 3 (a string of Y_i's acting on L_pure[Φ_1, ..., Φ_N]) already has the form of "a universal soft factor ⊗ the hard interaction". (And recall that there are no LP soft couplings in S_full[Φ_s].) We arrived at this structure of S_hard based only on the factorized gauge symmetry and power counting. This situation is analogous to the demonstration of the soft theorem for spin-1 gauge theories by SCET [38]. This not only shows the power of EFTs but also provides us with an understanding of soft theorems in terms of purely long-distance properties of the theory, as they should be understood. The collinear theorem at the LP [1] [2] is even more trivial: there are simply no collinear graviton couplings at the LP. Physically, there is no such thing as a "graviton jet". This does not mean that there should not be any gravitons in the scattering process at the LP. A collinear sector may consist of a graviton in S_pure in the purely hard limit, but it is not possible for any other graviton collinear to that graviton to exist at the LP. Again, we only need symmetry and power counting to see all this, which is very nontrivial from the diagrammatic perspective of the full theory, as we already noted at the end of Section 3.3. Finally, the fact that collinear gravitons manifestly decouple in the lagrangian in the λ → 0 limit immediately implies the absence of collinear IR divergences from gravitational interactions in all processes in our target phase space. There is no need to analyze diagrams, because the decoupling already occurs at the lagrangian level, so we cannot even conceive of diagrams that might be potentially collinear divergent in gravity SCET. Put another way, in any individual diagram of gravity SCET, every time we detach a collinear graviton line from a vertex, the λ dimension of the diagram decreases by at least one. Repeating this detaching procedure, we eventually arrive at a diagram with no collinear gravitons, which has no collinear IR divergence due to gravity. The original diagram we started with has a λ dimension higher than this, so it is obviously collinear IR finite.
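For reference, the LP soft theorem recovered here is, in a standard normalization (the precise factor of κ/2 is a convention assumption), the statement that a single graviton of momentum q and polarization ε_{μν} factorizes off any amplitude as q → 0:

$$
\mathcal{M}_{n+1}(p_1,\dots,p_n;q)\;\xrightarrow{\;q\to 0\;}\;\left[\sum_{i=1}^{n}\frac{\kappa}{2}\,\frac{\varepsilon_{\mu\nu}\,p_i^{\mu}p_i^{\nu}}{p_i\cdot q}\right]\mathcal{M}_n(p_1,\dots,p_n)\;+\;O(q^0),
$$

which is precisely the first-order term in the expansion of the product of the soft Wilson lines Y_i in S_hard.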
This collinear finiteness should be contrasted with the QCD SCET situation, in which the collinear gauge interactions are LP and thus fully remain in the effective lagrangian in the λ → 0 limit, leading to collinear divergences. On the other hand, soft graviton couplings from Y_i are LP, so we should expect soft IR divergences in gravity SCET from those soft graviton couplings. However, unlike in the full theory, the source of soft divergences is completely isolated in the EFT. Namely, soft divergences only appear in the matrix element of the product of Y_i's from S_hard. (Recall that soft graviton couplings in S_full[Φ_s] are suppressed at least by λ^2 and thus completely decouple in the λ → 0 limit.)

4 Gravity SCET at the Next-to-Leading Power (NLP)

4.1 Self-n_i-collinear gravitational interactions at the NLP

The argument in Section 3.1.1 suggests that there should be O(λ) couplings of n_i-collinear gravitons to n_i-collinear particles in S_full[Φ_i]. We can obtain all such couplings by expanding S_full[Φ_i] to O(λ), following the power counting rules described in Sections 2.3, 2.9, and 2.10. Note that those couplings are suppressed not only by λ but also by Q/M_Pl. We only care about the λ expansion in this paper, but we would like to point out that the only invariant energy scale in S_full[Φ_i] is λQ, not Q. Therefore, the validity of S_full[Φ_i] as an EFT in powers of 1/M_Pl only requires λQ ≪ M_Pl. The hard scale Q itself can be much larger than M_Pl as far as S_full[Φ_i] is concerned. This would be an important point if we would like to construct a gravity SCET for scattering processes in the Regge limit.

Collinear gravitational Wilson lines at the NLP

As we already pointed out in Section 3.2.2, an n_i-collinear operator Φ_i (which may correspond to a matter particle or to a graviton in the form of the covariant gravitational objects (3.3), (3.4), (3.5)) does transform at the NLP under the n_i-collinear diff×Lorentz gauge transformations (3.1). Therefore, we must make such operators gauge invariant also at the NLP so that we can put them in S_hard. As was realized in the original QCD SCET, the marvelous trick is to exploit the nonlocality of SCET discussed in Section 2.11.1 and use nonlocal objects, namely Wilson lines, to accomplish the desired invariance.

Collinear gravitational Wilson lines for collinear local Lorentz groups

As in the usual spin-1 gauge theory case, a Wilson line along a path P for the local Lorentz gauge group acting on a representation r is given by the path-ordered exponential of the spin connection along P, where the line integral is taken along the path P with P̄ indicating path-ordering, while Σ^{αβ}_r (α, β = 0, ..., 3) are the Lorentz generators for the representation r satisfying the Lorentz algebra (3.2). Here γ_{μαβ} is the spin connection, where our convention is such that the local-Lorentz covariant derivative is given by D_μ = ∂_μ + (1/2) γ_{μαβ} Σ^{αβ}. We then have the standard property that such a Wilson line transforms only at the two endpoints of its path. Now, we want one end of the Wilson line to extend to infinity so that we can render the field it acts on gauge invariant. Since fields are only allowed to be displaced in the x^{-_i} coordinate in SCET, the Wilson line (4.3) should run along the x^{-_i} direction out to infinity, built from the regulated integral defined in (3.14) (also see footnote 17). The argument that led to the use of that regulated integral works in just the same way here, except that it is p^{-_i}, not p^{+_i}, that picks up a +iε. The extra superscript (i) of γ^{(i)}_{μαβ} indicates that this spin connection is made of the n_i-collinear graviton field ϕ^i_{μν}, not the full ϕ_{μν}, because this Wilson line is for the n_i-collinear local Lorentz gauge group.
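Since the displays did not survive extraction here, the following is a hedged reconstruction of the collinear Lorentz Wilson line and its NLP truncation; with the convention D_μ = ∂_μ + (1/2)γ_{μαβ}Σ^{αβ} stated above, and with the integral understood in the regulated sense of (3.14), one expects (the overall sign is a convention assumption):

$$
W^i_r(x)\;=\;\overline{\mathrm P}\,\exp\!\left[\int_0^{\infty}\!ds\;\tfrac12\,\gamma^{(i)}_{\,-_i\alpha\beta}\big(x+s\,n_{-_i}\big)\,\Sigma^{\alpha\beta}_r\right]
\;=\;1+\int_0^{\infty}\!ds\;\tfrac12\,\gamma^{(i)}_{\,-_i\alpha\beta}\big(x+s\,n_{-_i}\big)\,\Sigma^{\alpha\beta}_r+O(\lambda^2),
$$

where the truncation to first order in the spin connection is the O(λ) form used in the NLP construction below.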
Then, by construction, the product W^i_r(x) Φ^{(r)}_i(x) is invariant under the n_i-collinear local Lorentz gauge group. Note that the product still transforms covariantly, as the representation r, under the global Lorentz group. Because of the factorized gauge symmetry structure (2.12), W^i_r can only act on an n_i-collinear operator, never on an n_j-collinear operator with j ≠ i or on a soft operator. Let us now power-count the exponent of the Wilson line (4.3). We expect it to be NLP, but NLP does not mean that the exponent is O(λ), because W^i_r has different components and they scale differently. This remark also applies to the different components of Φ^{(r)}_i that W^i_r acts on. Again, to avoid this complication, let's boost in the n^+_i direction by a rapidity of ∼ log λ such that n_i-collinear momenta now scale as ∼ (λ, λ, λ)_i. This is a global Lorentz boost, so the spin connection γ_{μαβ} also transforms covariantly, like everything else. In this boosted frame, all components of Φ^{(r)}_i scale with a common power of λ, and all components of ϕ^i_{μν} scale as ∼ λ. Now, let's look at the exponent of (4.3). Since s parametrizes the x^{-_i} coordinate, we have ds ∼ dx^{-_i} ∼ λ^{-1}. The bounds of the integration, 0 and ∞, do not scale with λ. For the spin connection, according to (2.20), the terms explicitly shown in (4.2) scale as p^μ p^α p^β/λ with an n_i-collinear momentum p, and thus as ∼ λ^2. The terms implicit in (4.2) are quadratic or higher in ϕ^i_{μν} and hence suppressed by an extra power or powers of λ, so they are subleading to the explicitly shown terms. Combining all the pieces, we see that, in the boosted frame, the exponent of (4.3) scales as ∼ λ^{-1} λ^2 = λ. We therefore must Taylor-expand W^i_r to O(λ) and truncate the higher-order terms, in order to make power counting manifest in the effective action and not to include higher-order terms we have no right to keep. Once it is Taylor-expanded and multiplied by Φ^{(r)}_i on the right, we can boost back to the original frame, which does not change the fact that the first-order term is suppressed by λ compared to the zeroth-order term, thanks to the Lorentz covariance of each term. To summarize, for constructing a gravity SCET to the NLP, the n_i-collinear Lorentz Wilson line is given by its truncation (4.4) to first order in the spin connection.

Collinear gravitational Wilson lines for collinear diff groups

By multiplying by a collinear Lorentz Wilson line (4.4), we have made every Φ^{(r)}_i in S_hard invariant under all collinear Lorentz gauge groups. Let Φ_i be such a collinear-Lorentz-invariant object. As we already pointed out below (3.1), such a Φ_i is no longer invariant at the NLP under n_i-collinear diff transformations but transforms as

Φ_i → Φ_i + ξ^μ_i ∂_μ Φ_i ,   (4.5)

where the last term above scales as ∼ λ Φ_i. So, in order to make S_hard fully gauge invariant also at the NLP, we must find an object V_i such that V_i Φ_i is invariant under the n_i-collinear diff group to the NLP. Here, we do not need a superscript (r) for Φ_i or V_i, because the transformation (4.5) is independent of r. So, we already know that the collinear graviton couplings from V_i are spin independent. Again thanks to the nonlocality of SCET, we can find V_i. Since (4.5) is an infinitesimal translation in spacetime, it is natural to guess that V_i must schematically have the form ∼ 1 + Γ when expanded to the NLP, where Γ is the Christoffel connection.
To determine the precise form, note that Γ^μ_{νρ} transforms under a general infinitesimal diff transformation δx^μ = −ξ^μ(x) as

δΓ^μ_{νρ} = ∂_ν ∂_ρ ξ^μ + ξ^σ ∂_σ Γ^μ_{νρ} − (∂_σ ξ^μ) Γ^σ_{νρ} + (∂_ν ξ^σ) Γ^μ_{σρ} + (∂_ρ ξ^σ) Γ^μ_{νσ} ,   (4.6)

where higher powers of ξ are neglected but the exact dependence on the graviton field at O(ξ) is kept. Now, for the n_i-collinear sector, the collinear scalings (2.20) and (2.21) tell us that the first term on the right-hand side of (4.6) scales with an n_i-collinear momentum p as ∼ p_ν p_ρ ξ^μ, while all the other terms on the right-hand side are suppressed relative to it by at least one power of λ. So, at the leading nontrivial order in λ, the diff transformation (4.6) simply reduces to δΓ^μ_{νρ} = ∂_ν ∂_ρ ξ^μ. Thus, the integral −∫dx^ν ∫dx^ρ δΓ^μ_{νρ} ∂_μ will give us −ξ^μ ∂_μ, and we can use this to cancel the unwanted term in (4.5). The integration path should be taken in the x^{-_i} direction, the allowed direction for fields to be displaced in the n_i-collinear sector. And we must take care of the convergence of the integrals consistently with the +iε prescription, as we did for Y_i and W^i_r. So, we find that V_i should be given by

V_i(x) = exp[ −∫_0^∞ ds ∫_s^∞ ds′ Γ^μ_{-_i-_i}(x + s′ n_{-_i}) ∂_μ ] ,   (4.10)

with the s and s′ integrations regulated consistently with the +iε prescription. (Unfortunately, the notation we used for Y_i and W^i_r does not work here, but the comment in footnote 17 also applies to V_i.) Let's quickly power-count V_i(x). The derivative ∂_μ in (4.10) acts on an n_i-collinear operator, so ∂_μ ∼ p_μ with an n_i-collinear momentum p. The s and s′ variables are shifts in the x^{-_i} coordinate, which is ∼ λ^0, so they do not scale. Finally, Γ^μ_{-_i-_i} scales as ∼ p^μ p_{-_i} p_{-_i}/λ ∼ p^μ/λ, as we discussed above. We thus see that V_i = 1 + O(λ), as it should be.

Comments on nonlocal "dressing" of operators

All of our Wilson lines, Y_i(x), W^i_r(x), and V_i(x), can be thought of as particular realizations of the notion of "dressing" discussed in [14,15]. The motivation of [14,15] for dressing is to find diff invariant observables in quantum gravity that would return to the usual local operators in the limit of no gravity, and they discuss various possible forms of diff invariant dressing of local operators, including ones very similar to, but still different from, our V_i(x). In our SCET, we derived the specific forms of dressing via Y_i, W^i_r, V_i from the factorized gauge symmetry (2.12), which is "merely" an effective gauge symmetry at long distances and thus makes no reference to the ultimate nature of quantum gravity beneath the Planck length.

Soft graviton couplings at the NLP

Here, we would like to show, again just using symmetry and power counting, that there are no NLP couplings of soft gravitons to anything. First, as already pointed out in Section 3.1.2, there are no O(λ) soft graviton couplings to soft particles, including soft gravitons themselves. Next, for NLP couplings of soft gravitons to n_i-collinear particles, we need to expand the infinitesimal soft diff×Lorentz transformation (3.6) to the NLP; in the resulting expression (4.11), the second-to-last term is what we already had at the LP, while the last term, ξ_s^{⊥_i} ∂_{⊥_i} Φ_i, is new. The new term is suppressed by λ compared to the other terms, because ξ_s^{⊥_i} ∼ λ^0 and ∂_{⊥_i} Φ_i ∼ λ Φ_i. The transformation (4.11) is still spin independent, which is why we have suppressed the superscript (r) of Φ^{(r)}_i. Now, it might appear that there should be NLP soft graviton couplings to collinear particles, because Φ_i does seem to transform at the NLP. However, the last term in (4.11) can actually be eliminated by exploiting the RPI we mentioned in Section 2.3.2, namely, the invariance of the theory under the redefinition (or reparametrization) of the basis vectors n^±_i.
Now, because we have already established that there are no collinear gravitational interactions at the LP, there is only one n_i-collinear particle (see footnote 18) in the n_i-collinear sector at the LP. So, we can always redefine the basis vectors n^±_i by adding to them some vectors lying in the ⊥_i plane such that the new ⊥_i component of the momentum of this particle vanishes exactly. In such a basis of n^±_i, the last term in (4.11) is zero, and hence the LP expression (3.7) we used in Section 3.2.3 now becomes correct also at the NLP. The transformation of the soft graviton field (3.9) we used there also remains unchanged at the NLP, because that is just the +_i+_i component of the transformation law (2.16), which is also correct at the NLP as we discussed at the very end of Section 2.9. Therefore, once n^±_i is chosen for every i such that the ⊥_i component of the momentum of Φ_i vanishes, the soft graviton couplings from Y_i we obtained in Section 3.2.3 become correct also at the NLP.

Other sources of NLP contributions

There are other possible sources of NLP contributions to S_hard:
(a) The insertion of an extra n_i-collinear graviton into an LP operator in one of the manifestly covariant forms R^i_{-_i-_i} or R^i_{-_i⊥_i-_i⊥_i} (cf. (3.4) and (3.5)). Needless to say, we then have an independent integration over the x^{-_i} coordinate of the inserted field, in addition to the x^{-_i} integration for the existing n_i-collinear field. The NLP operator thus constructed has its own Wilson coefficient, with dependence on the x^{-_i} coordinate of the inserted field. Of course, each of these R^i_{-_i-_i} and R^i_{-_i⊥_i-_i⊥_i} must be multiplied by Y_i to take into account G_s.
(b) The replacement of the leading component of an n_i-collinear field by a subleading component of the same field. An example is the replacement of P^+_i ψ_i with P^-_i ψ_i for a spinor ψ_i.
(c) The 2-graviton terms in (3.3), (3.4), or (3.5), if the LP operator already has an n_i-collinear graviton in one or more of those three forms.
(d) At the NLP, we must be careful about our conventions that Φ^{(r)}_i should include all derivatives acting on the "elementary" field in question and that Φ^{(r)}_i should transform covariantly under the n_i-collinear Lorentz group. Since the field transforms at the NLP under the n_i-collinear Lorentz group, those derivatives must be n_i-collinear covariant derivatives in terms of the n_i-collinear spin connection, γ^{(i)}_{μαβ}, which contains NLP n_i-collinear graviton couplings. Here, it suffices to include only the one-graviton terms explicitly shown in (4.2), because terms quadratic or higher in the graviton field are NNLP or higher.
(e) Similarly, if Φ^{(r)}_i contains vierbeins in it, they may contain NLP couplings. See the relevant discussion in Section 3.2.2. Again, we only need to expand the vierbeins to first order in the graviton field for NLP couplings.
Among these, the NLP couplings from (c)-(e) are completely fixed by the diff×Lorentz gauge invariances and hence do not introduce any new parameters. The couplings via (b) are fixed by RPI and hence again introduce no new parameters. To see this, note that n_i-collinear RPI may be viewed as n_i-collinear Lorentz invariance with the lightcone basis vectors n^± being treated as spurions [20]. In other words, an expression is reparametrization invariant if it does not refer to n^±. So, for example, when we replace a P^+_i ψ_i with a P^-_i ψ_i, RPI tells us that the latter should come with exactly the same coefficient as the former, because the only Lorentz covariant combination of P^+_i ψ_i and P^-_i ψ_i without referring to n^± is P^+_i ψ_i + P^-_i ψ_i = ψ_i.
Similar arguments apply to fields with higher spins. It is also quite possible that the operators from (a) are also fixed by RPI, or local Lorentz symmetries with the spurions n^±. Indeed, the concrete examples discussed in Appendices A.1-A.3 suggest that this might be the case. However, (dis-)proving the case requires an understanding of how RPI should be modified by the curvature of spacetime, at least to the NLP if not to all orders. Identifying the group of such RPI transformations and exploring its full implications clearly constitutes an interesting and important problem that deserves its own separate study [39]. Another likely possibility is that the operators from (a) are not fixed, but there are universality classes of gravity SCETs and the theories of Appendices A.1-A.3 belong to the same universality class. This would also be extremely interesting. Next, let's examine the fact that, in terms of power counting alone, the replacement of a ∂_{-_i} with a ∂_{⊥_i} in S_hard would yield an NLP contribution. However, RPI tells us that these contributions vanish in the basis described in Section 4.4, in which the ⊥_i component of the momentum of Φ^{(r)}_i vanishes. One might also wonder whether, if S_hard already had a ∂_{⊥_i} at the LP, the subleading O(λ^2) fluctuations in the ∂_{⊥_i} would also yield NLP contributions (see footnote 8). However, this is actually not possible. If we could eliminate terms with ∂_{⊥_i} by reparametrizing n^±_i, that would mean that there should be other terms in S_hard that would produce ∂_{⊥_i} upon such reparametrization. But reparametrization in the ⊥_i plane is O(λ). Hence, terms with ∂_{⊥_i} cannot be LP but must be NLP at most, so the subleading O(λ^2) fluctuations in the ∂_{⊥_i} would actually be NNLP effects. One might wonder if there are more subtle NLP contributions. For example, rather than displacing x^μ straight to x^μ + s n^μ_{-_i} for the nonlocal integrations in S_hard, shouldn't we displace it along some curved path? We don't have to worry about this. Since the path is straight at the LP, if it is curved at the NLP, it is of the form of a straight LP path plus an NLP deviation. So, the NLP deviation can just be Taylor-expanded around the straight path. For non-scalar fields, NLP deviations in the paths will also be accompanied by NLP twists in the directions of the fields, which can again be taken into account by derivatives and Lorentz generators acting on the fields. Therefore, the curved-path effects, if any, are already implicitly included by writing down all possible derivatives and tensor structures for operators in S_hard. It is tempting to wonder if the contributions of type (a) could arise from describing curved paths in an RPI manner. Possible NLP pieces in the cross-collinear scaling (2.8) can also likewise be taken into account by writing down all possible derivatives and tensor structures. Thus, the right-hand side of (2.8) can be regarded as strictly λ^0 without subleading O(λ) pieces. Similarly, we do not have to worry about subleading corrections to cross-collinear contractions of bosonic fields, nor to those of fermions, (2.29) and (2.33).

Recipe for constructing a gravity SCET lagrangian to the NLP

Here's a complete recipe for how to construct a gravity SCET that is correct to the NLP for scattering processes in our target phase space: (1) construct S_pure[Φ_1, ..., Φ_N] by matching purely hard amplitudes of the full theory; (2) dress every Φ^{(r)}_i(x) in S_hard with the Wilson lines Y_i(x), W^i_r(x), and V_i(x), each expanded and truncated to the appropriate order in λ; (3) add the NLP operators of types (a)-(e) of Section 4.5, again dressed with Y_i(x).
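In formula form, step (2) of the recipe dresses each building block as follows; the relative ordering of the three Wilson lines is an assumption (reorderings of the two collinear dressings differ only at the NNLP, since each deviates from 1 at O(λ)):

$$
\Phi^{(r)}_i(x)\;\longrightarrow\;Y_i(x)\,V_i(x)\,W^i_r(x)\,\Phi^{(r)}_i(x),
$$

which matches the explicit dressings used in the examples of Appendix A (there with the soft sector, and hence Y_i, switched off).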
Recall our convention that Φ^{(r)}_i includes all derivatives acting on the "elementary" field in question and transforms covariantly under the n_i-collinear Lorentz group.

Soft/collinear theorems for gravity at the NLP

From our discussions above, we see that the LP soft theorem is also correct at the NLP, provided that we regard the collinear graviton couplings of type (a) of Section 4.5 as belonging to the "hard" part of the soft theorem. But it should be emphasized that this conclusion relies on the use of the RPI discussed in Section 4.4, which in turn relies on the assumption of our target phase space that the angles of collinear splittings due to non-gravitational interactions are hierarchically different from those due to gravity. Relaxing this assumption and studying how the soft theorem should be modified at the NLP is an important and interesting problem. In contrast, the decoupling of collinear gravitons no longer holds at the NLP. Namely, we can have a new collinear graviton in one of the collinear sectors. However, in order to state a collinear theorem, that is, to make some universal statements, we need to be able to say that the contributions from operators of type (a) of Section 4.5 are fixed by RPI or something like it, or show that there is more than one universality class and classify all possible universality classes. We defer this very important and interesting question to our future publication [39]. However, even without any universal theorem, the structures of LP and NLP operators dictated by the symmetry and power counting of the EFT dramatically simplify the calculation of NLP contributions to amplitudes. In the EFT, we only need to calculate one term from each of the NLP operators of type (a) of Section 4.5 to determine its Wilson coefficient, and then we have the entire NLP amplitude. In contrast, in the full theory, one needs to calculate every single NLP term, which is possible only after carefully and laboriously expanding each vertex and each propagator of each diagram down to the NLP, starting, typically, at the BLP. One can see this contrast even in the simple examples in Appendices A.1-A.3.

Comparison with the literature

Ref. [13] made the first attempt to construct a SCET for gravity at the LP, where the main thrust of the paper is to provide a SCET-like demonstration of the decoupling of BLP collinear graviton couplings, although it also discusses the soft sector. However, while their decoupling argument itself may be valid, the structure of their effective lagrangian prior to the decoupling, viewed as a SCET, has the following problems. They introduce a collinear Wilson line V of the form (in our convention) of an exponential of the h_{--} component of the graviton without any Lorentz generator. As can be seen from the absence of a Lorentz generator, this Wilson line is for the diff part of diff×Lorentz, like our V_i. In Ref. [13], V is multiplied onto fields in collinear sectors different from the sector that the h_{--} inside V(x) belongs to, thereby describing BLP couplings of collinear gravitons in our language. In fact, the core of the paper is to show that V, and hence the BLP couplings, can be completely removed from the lagrangian by a coordinate transformation or an equivalent field redefinition. In our viewpoint, there are four problems here. First, unlike our V_i, V does not transform under diff as a Wilson line should. One can check this explicitly, but also recall our analysis of diff invariance in Section 3.2.3 that determines the diff Wilson line uniquely to be V_i, not V. Therefore, neither the form of V itself nor the rule for how V should enter the lagrangian is based on symmetry; one needs to look at full-theory diagrams to see those.
Second, even if it did have the right transformation property as a Wilson line within the collinear sector that V belongs to, the factorized gauge symmetry would forbid V from being multiplied onto a field in a different collinear sector. So, there would be no need to go through a coordinate transformation or field redefinition to decouple V(x) from the lagrangian, because it could not be written down in the first place. Third, given that it had already been known from full-theory analyses [2] that BLP and LP collinear graviton couplings should always cancel, the right EFT with manifest power counting and the right effective symmetries should exhibit their absence from the outset. Indeed, in our gravity SCET, the largest couplings of collinear gravitons ever permitted by symmetry are NLP. In contrast, the theory of Ref. [13] apparently permits BLP couplings via V to be written down, although they have clear arguments for why there must exist a coordinate transformation that removes such couplings, and they also show an explicit expression for a field redefinition that removes V. While those arguments and that field redefinition themselves may be valid, they do not constitute a SCET demonstration of the decoupling. Finally, while our analysis shows the absence of both BLP and LP couplings, Ref. [13] does not discuss the decoupling of LP collinear graviton couplings, which in particular requires discussing not only the h_{--} but also the h_{-⊥} components. On the other hand, for the soft Wilson line Y_i, we agree on its final form with Ref. [13] as well as with the older, full-theory studies [2,35,36]. In Ref. [13], after the soft Wilson line is introduced, it is verified that it has the right transformation property as a Wilson line. Our more deductive derivation in terms of symmetry also makes clear the uniqueness of its form, as well as the fact that it does not get modified at the NLP, as discussed in Section 4.4.

Conclusions and future directions

In this paper, we have identified fundamental building blocks of gravity SCET and laid out a procedure for writing down the effective lagrangian for any given full theory at the leading power and the next-to-leading power, for processes that belong to our target phase space. In particular, we identified basic building blocks of the EFT, the most notable being the soft, collinear Lorentz, and collinear diff Wilson lines: Y_i, W^i_r, and V_i, respectively. The soft theorem and the decoupling of collinear gravitons at the leading power are structurally manifest at the lagrangian level in the gravity SCET. Permeating and underlying all of our analyses are mode separation, symmetry (especially the factorized gauge symmetry), and power counting. (But one should recall that the factorized gauge symmetry is a consequence of mode separation, which we need for manifest power counting, which is compulsory for an EFT to systematically control the kinematics of its target phase space.) All results and claims in this paper are derived from those principles without recourse to diagrammatic analyses. The gravity SCET lagrangian thus constructed is now ready for perturbative calculations, with effective symmetries and power counting maximally manifest, unlike calculations in the full theory; this greatly facilitates the calculations. There are some obvious variations of our gravity SCET. First, the target phase space may be modified.
As we alluded to in the Introduction, if we relax our assumption that all particles have zero or negligible mass compared to the soft scale λ^2 Q, we will need to adapt our construction of gravity SCET to massive particles. Or, if we relax the assumption that non-gravitational collinear splittings occur with a much larger or smaller "λ" than the λ of gravitational splittings, we will need to simultaneously include all gauge groups in the factorized effective symmetry and introduce corresponding collinear and soft Wilson lines for all of them. Being an EFT for highly energetic collinear particles, SCET can also be useful for studying the amplitudes of extremely energetic forward scattering (the Regge limit). The Regge limit may give us another interesting channel toward understanding gravity [40-47]. In QCD SCET, it has been established [31-33] that an additional mode called the Glauber mode is necessary in this region for a consistent SCET, and we expect that our construction of gravity SCET can be suitably modified for the Regge limit by adapting the framework of [33] for gravity. As discussed in [33], the rapidity renormalization group [48-51] plays an important role. Since we have adopted the position-space formulation of SCET for our gravity SCET, as opposed to the label SCET formalism as in [33], it may be hoped that a simplification in dealing with the rapidity renormalization group may be achieved by adapting the analytic regulator method (originally proposed by [52], with a fully consistent perturbative treatment by [53]) for the Regge region and, most importantly, for curved spacetimes. In particular, it must be checked whether the consistency of analytic regularization shown for QCD SCET [52] also holds for gravity SCET, especially in the Regge region. Within or away from our target phase space, the study of RPI in the presence of gravity is important and interesting [39]. As we pointed out above, such a study should constitute an essential part of establishing soft and collinear theorems for gravity beyond the leading power in λ. In particular, the concrete examples discussed in Appendices A.1-A.3 hint at as-yet-undiscovered relations between the coefficients of operators of type (a) (see Section 4.5) and those of LP operators. If these relations are due to RPI, that would mean that the RPI transformations in gravity SCET should depend on graviton fields. This is not unexpected from the viewpoint that n_i-collinear RPI transformations are the n_i-collinear local Lorentz transformations that are "broken" by the introduction of n^±_i, and the choice of an n_i-collinear freely falling local inertial frame should depend on the n_i-collinear graviton field configuration. Unlike in the case of QCD SCET, RPI in gravity SCET is an aspect of the factorized gauge symmetry, especially the Lorentz part of diff×Lorentz, so it should inevitably involve the graviton field. It is also possible that the relations hinted at by the calculations in Appendices A.1-A.3 are not due to RPI but instead are an indication of the existence of "universality classes" of gravity SCET. Then, classifying all possible universality classes of gravity SCET would be an interesting problem. Finally, as we alluded to in the Introduction, it is interesting to study possible relations between the infinite dimensional symmetries suggested by gravity SCET and those of the full theory discovered by [3-5]. In this regard, Ref. [38] already notes a connection between the RPI in QCD SCET and the Möbius group.
We also noted that an EFT is "half way" between the full-theory lagrangian and the S-matrix elements, in that the path integrals are partly done by integrating out the modes outside the target phase space. So, it may also be possible to find an EFT that captures, at the lagrangian level, some of the amazing properties of the gravitational S-matrix, such as "gravity = gauge^2" [6,7]. In particular, it would be nice to find a logic of constructing an EFT that "automatically" leads to the double-copy symmetry structure discovered ingeniously by Ref. [54] within the full theory. We hope that our derivation of gravity SCET might serve as a useful prototype or guide for further development of EFTs as a means to explore gravity amplitudes.

A Appendices: Explicit Examples

In the following appendices, we consider examples with four collinear sectors in three different full theories, and compare tree-level full-theory amplitudes with the corresponding SCET amplitudes at the LP and NLP. For simplicity, we ignore the soft sector. We will see how the collinear Lorentz and diff Wilson lines ((4.4) and (4.10)) as well as the Ricci and Riemann tensors ((3.4) and (3.5)) appear in effective SCET operators. The examples below also demonstrate a practical power of gravity SCET. The amplitude of each example would require lengthy calculations with laborious expansions in λ to the NLP, followed by tricky cancellations of a large number of BLP and LP terms. In contrast, the SCET directly gives us the NLP amplitude in terms of a few coefficients of effective operators, which also tells us which full-theory amplitude we should handpick to determine those few coefficients with the least effort.

A.1 The φ^4 full theory

Here, we consider a full theory (A.1) with one massless real scalar φ with a quartic self-interaction, minimally coupled to Einstein gravity, where R is the Ricci scalar and we have set M_Pl = 1. We will first consider a purely hard process φ_1 φ_2 → φ_3 φ_4 (where φ_i is n_i-collinear by definition) at tree level and construct a corresponding purely hard SCET operator. We will then consider a process φ_1 φ_2 → φ_3 φ_4 g_1, where g_1 is an n_1-collinear graviton. We will calculate the amplitude of this process at tree level at the LP and NLP in the full theory, and see how it is reproduced from the LP and NLP pieces of the SCET lagrangian. As described in Section 3.2.1, the first step is to construct S_pure = ∫d^4x L_pure. For the process φ_1 φ_2 → φ_3 φ_4, the form of L_pure to be matched is

L_pure(φ_1, ..., φ_4) = ∫ds_1 ds_2 ds_3 ds_4 C(s_1, s_2, s_3, s_4) φ_1(x_1) φ_2(x_2) φ_3(x_3) φ_4(x_4) ,   (A.2)

where x_i ≡ x + s_i n_{-_i}. Matching the amplitude from L_pure to that from the full theory (A.1) at tree level, we get

C(s_1, s_2, s_3, s_4) = −κ δ(s_1) δ(s_2) δ(s_3) δ(s_4) .   (A.3)

(So, this is a special case where L_pure happens to be local.) Let us now turn on gravity and consider the process φ_1 φ_2 → φ_3 φ_4 g_1 at tree level. In the full theory, there are two sources of one-graviton couplings: √−g = 1 + h/2 + O(h^2) and g^{μν} = η^{μν} − h^{μν} + O(h^2); the relevant interactions in the full theory are given by the O(h) terms obtained from these expansions. Therefore, in the process φ_1 φ_2 → φ_3 φ_4 g_1, the graviton g_1 can be emitted from any of the φ_1, ..., φ_4 legs or from the φ^4 vertex (see Fig. 1). As discussed in Section 2.6, the amplitude with g_1 emitted from φ_1 (Fig. 1(a)) is the same in the full and effective theories, because it is a process occurring within the n_1-collinear sector alone. Hence, we compare the full-theory amplitude from all the other diagrams with the SCET amplitude from L_hard.
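The display (A.1) did not survive extraction; a minimal action consistent with the matching result (A.3) above is the following, where the 1/4! normalization of the quartic coupling and the Einstein-Hilbert normalization are assumptions about conventions:

$$
S_{\rm full}\;=\;\int d^4x\,\sqrt{-g}\left[\frac{M_{\rm Pl}^2}{2}R+\frac12\,g^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi-\frac{\kappa}{4!}\,\phi^4\right],\qquad M_{\rm Pl}=1,
$$

for which the tree-level 4-point vertex is −iκ, matching C(s_1, ..., s_4) = −κ δ(s_1)δ(s_2)δ(s_3)δ(s_4).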
In the full theory, the diagrams with g_1 emitted from the φ_{2,3,4} legs (Fig. 1(b) and two similar diagrams with g_1 emitted from φ_{3,4}) individually contain BLP and LP graviton couplings. But when we carefully expand the amplitudes in powers of λ and add them together, all the BLP and LP terms cancel out and we are left with NLP contributions, as we expect from the EFT. The diagram with g_1 emitted from the φ^4 vertex (Fig. 1(c)) is already NLP by itself. At the end of the (very long) day, the full-theory amplitude for φ_1 φ_2 → φ_3 φ_4 g_1 with g_1 emitted from φ_{2,3,4} or the φ^4 vertex has the form iM_1 + iM_2 + iM_3, with the explicit expressions given in (A.6)-(A.8). In these expressions, all lightcone indices −, +, ⊥ refer to the n_1-collinear coordinates, and similarly h_{μν} is the n_1-collinear graviton field, treated as an external field and not necessarily on-shell. All the matter particles, on the other hand, are taken to be on-shell. A mark on an index (such as α) means that the index refers only to the ⊥ components in the n_1-collinear coordinates. The momenta of φ_{1,2,3,4} and g_1 are denoted by p_{1,2,3,4} and q, respectively, where p_{1,2} are ingoing while p_{3,4} and q are outgoing. As noted above, each and every term in iM_{1,2,3} is NLP. Let us now ask where iM_{1,2,3} come from in the SCET. First, as described in Section 4.6, each φ_i(x) in L_pure should be replaced by φ̃_i(x) ≡ V_i(x) φ_i(x), where V_i(x) is the n_i-collinear diff Wilson line (4.10). (Recall that we are ignoring the soft sector for simplicity here, so there is no Y_i.) Let us refer to this part of L_hard as L_hard-1, i.e.,

L_hard-1(φ_1, ..., φ_4, h) = L_pure(φ̃_1, ..., φ̃_4) .   (A.9)

One can verify that the n_1-collinear graviton couplings from expanding L_hard-1 to the NLP exactly reproduce iM_1. Next, the full-theory amplitude iM_2 is reproduced in the SCET by an operator L_hard-2 containing an n_1-collinear Riemann tensor R^1_{μνρσ} (see footnote 19), whose explicit form (A.10) involves the integrations ∫ds_1 ··· ds_4 ds_5, with s_5 parametrizing the x^{-_1} position of the Riemann-tensor insertion. Of course, this expression needs to be expanded to the NLP. In doing so, the μ and ν indices only need to be in the ⊥_1 directions, as we described below (3.5). Together with the n^α_{-_1} n^β_{-_1} factors, we will be picking up only the R^1_{−⊥−⊥} component. Since this is already O(λ), all the φ_i's in (A.10) should be replaced by the respective φ̃_i's. Finally, the full-theory amplitude iM_3 is reproduced in the SCET by an operator L_hard-3 containing an n_1-collinear Ricci tensor R^1_{μν}, whose explicit form (A.11) again involves the integrations ∫ds_1 ··· ds_4 ds_5. Again, since n^α_{-_1} n^β_{-_1} R^1_{αβ} is already O(λ), all the φ_i's should be replaced by the respective φ̃_i's.

A.2 The φ^3 full theory

Here we give another example, repeating the same process as above but this time with a φ^3 interaction in the full theory. First, for φ_1 φ_2 → φ_3 φ_4 without graviton emissions, the overall form of L_pure remains as in (A.2), but the Wilson coefficient C in (A.3) is now replaced by C′:

C′(s_1, s_2, s_3, s_4) = κ^2 [ θ(−s_1) θ(−s_2) δ(s_3) δ(s_4) / (2 n^+_1 · n^+_2) − δ(s_2) θ(s_3) δ(s_4) / (n^+_1 · n^+_3) − δ(s_2) δ(s_3) θ(s_4) / (n^+_1 · n^+_4) ] ,   (A.13)

where the three terms in C′ correspond to the s-, t-, and u-channel exchange diagrams of the full theory, respectively. We see that L_pure displays a characteristic nonlocality of SCET, where the nonlocality comes from integrating out the highly off-shell virtual φ, because such highly off-shell modes are not degrees of freedom of the SCET, unlike the collinear and soft modes.
The signs of the arguments of the step functions above are chosen to ensure that the integration over the respective s_i converges with the usual prescription of adding iε to the energy with a positive infinitesimal ε. Next, for φ_1 φ_2 → φ_3 φ_4 g_1, direct calculations show that the full-theory amplitudes iM_{1,2,3} of (A.6)-(A.8) are replaced in the φ^3 theory by iM′_{1,2,3}, given in (A.14), where A_{1,2,3} are defined in (A.6)-(A.8), and the Mandelstam variables are defined as s_q = 2(p_1 − q)·p_2, t_q = −2(p_1 − q)·p_3, and u_q = −2(p_1 − q)·p_4, with s_0, t_0, u_0 being s_q, t_q, u_q with q = 0, respectively.

Figure 2: A full-theory diagram with the graviton emitted from the internal propagator in the φ^3 theory.

One can verify that the full-theory amplitudes iM′_{1,2,3} are reproduced in the SCET by the operators L_hard-1,2,3 in (A.9)-(A.11) with C replaced by C′, where the dependences on the Mandelstam variables in (A.14) all come from the step functions in C′. The fact that C is just replaced by C′ is expected for the contributions from L_hard-1, because L_hard-1 is literally just L_pure with each φ_i replaced by φ̃_i = V_i φ_i. It may be surprising for L_hard-2,3, in that those operators are gauge invariant by themselves, so they are not related by gauge symmetry to L_pure. We even have diagrams like Fig. 2 that do not have the 1/s + 1/t + 1/u structure. There are two possible stories after this. One is that the φ^4 and φ^3 theories above belong to the same "universality class" of gravity SCET, so we only need to calculate one theory to fix the structures and coefficients of L_hard-2,3. The other possibility is that there is a symmetry, perhaps RPI, that actually completely fixes the structures and coefficients of L_hard-2,3 once L_pure is given. Both possibilities are interesting and deserve a further dedicated study [39].

A.3 The φφψ̄ψ full theory

Our third example is meant to pick up the collinear Lorentz Wilson line W^i_r of (4.4). We also see that RPI plays the role of type (b) of Section 4.5. Consider a full theory (A.15) of a real scalar and a Dirac fermion with a φφψ̄ψ contact interaction minimally coupled to gravity, where the covariant derivative, D_μ, is defined around (4.2). We then consider the processes ψ_1 φ_2 → ψ_3 φ_4 and ψ_1 φ_2 → ψ_3 φ_4 g_1, as we did for the φ^4 and φ^3 theories. First, for ψ_1 φ_2 → ψ_3 φ_4 without a graviton emission, the form of L_pure remains as in (A.2), except for the replacements φ_1 → (P^+_1 ψ_1)_a and φ_3 → (ψ̄_3 P^-_3)_a, with the helicity projection operators (2.27), where a is a Dirac spinor index that is summed over; this leads to the spinor structure ψ̄_3(x_3) P^-_3 P^+_1 ψ_1(x_1) in L_pure. The coefficient C remains as in (A.3). Now consider ψ_1 φ_2 → ψ_3 φ_4 g_1 in the full theory. The amplitudes iM_{1,2,3} of (A.6)-(A.8) are now replaced with

iM_i = −iκ (ψ̄_3 P^-_3 P^+_1 ψ_1) A_i   (i = 1, 2, 3) ,   (A.16)

with A_{1,2,3} given by (A.6)-(A.8). Here, ψ_1 and ψ̄_3 denote appropriate on-shell spinor wave functions, not the field operators, but there should be no confusion. Most importantly, there is a new term iM_4 in the amplitude, given in (A.17), in which the cancellations of BLP and LP contributions are already taken care of, so every term in that expression is NLP. On the EFT side, L_hard-1 has the same form as (A.9), except for the replacements φ_1 → (ψ̃_1)_a and φ_3 → (ψ̃_3)_a (the latter entering through its Dirac adjoint), where a is a Dirac spinor index that is summed over, and the ψ̃_i (i = 1, 3) now also include a Lorentz Wilson line, ψ̃_i(x) ≡ V_i(x) W^i_D(x) ψ_i(x), where W^i_D(x) is the n_i-collinear Lorentz Wilson line (4.4) for the Dirac spinor representation.
Also, note the absence of helicity projection operators in the replacements φ_1 → (ψ̃_1)_a and φ_3 → (ψ̃_3)_a. This is to take into account constraints from RPI (item (b) of Section 4.5). Then, one can verify that the terms coming from the diff Wilson lines in L_hard-1 reproduce the amplitude iM_1 as before. Most importantly, the terms from the n_1-collinear Lorentz Wilson line in L_hard-1 exactly reproduce iM_4. Finally, the amplitudes iM_{2,3} are reproduced from L_hard-2,3 of (A.10) and (A.11) with the same C, but with the replacements φ_1 → (P^+_1 ψ̃_1)_a and φ_3 → (ψ̄̃_3 P^-_3)_a, with a summed over. Again, we observe the same clear pattern as we discussed in Appendix A.2.
2018-02-16T14:38:29.000Z
2017-10-20T00:00:00.000
{ "year": 2017, "sha1": "95f9d50552da30014764cd91a09150a495adcfb3", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.97.066011", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "95f9d50552da30014764cd91a09150a495adcfb3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
27034992
pes2o/s2orc
v3-fos-license
Unsupervised Representation Learning with Laplacian Pyramid Auto-encoders

Scale-space representation has been popular in the computer vision community due to its theoretical foundation. The motivation for generating a scale-space representation of a given data set originates from the basic observation that real-world objects are composed of different structures at different scales. Hence, it's reasonable to consider learning features with image pyramids generated by smoothing and down-sampling operations. In this paper we propose Laplacian pyramid auto-encoders, a straightforward modification of the deep convolutional auto-encoder architecture, for unsupervised representation learning. The method uses multiple encoding-decoding sub-networks within a Laplacian pyramid framework to reconstruct the original image and the low-pass filtered images. The last layer of each encoding sub-network also connects to an encoding layer of the sub-network in the next level, which aims to reverse the process of Laplacian pyramid generation. Experimental results showed that the Laplacian pyramid benefited the classification and reconstruction performance of deep auto-encoder approaches, and that batch normalization is critical to get deep auto-encoder approaches to begin learning.

INTRODUCTION

Real-world objects are meaningful only at a certain scale. You might see an apple perfectly on a table, but if looking at the earth, then it simply does not exist. This multi-scale nature of objects is quite common in nature. Scale-space theory is a framework for early visual operations with complementary motivations from physics and biological vision, which has been developed by the computer vision community to handle the multi-scale nature of image data [20]. It is a formal theory for handling visual structures at different scales, by embedding the original image into a one-parameter family of derived images, in which fine-scale structures are successively suppressed. Scale-space representation has wide application in computer vision. For example, the scale-invariant feature transform (SIFT) [21], a successful hand-crafted feature in computer vision used to detect and describe local features in images, includes an important stage of keypoint localization, in which keypoints are defined as minima and maxima of the result of a difference of Gaussians (DoG) function applied in scale space to a series of resampled and smoothed images. In consideration of the successful applications of scale-space representation in hand-crafted feature engineering, it's reasonable to apply it in unsupervised representation learning, especially nowadays when supervised deep learning methods have achieved great success in many tasks owing to their ability to learn features from raw pixels. Recent work (DeCAF) [7] has shown that strong generic feature representations can be extracted from the activations of pretrained networks. DeCAF defined a new visual feature by concatenating the flattened activations of each layer in the pre-trained networks, which is learned on a set of pre-defined object recognition tasks.
This feature has shown strong generalization ability when applied to new tasks, which suggests that there exists a generically useful feature representation for natural visual data. However, training deep models in a supervised way needs millions of semantically-labeled images, which cost lots of manual work. Collecting large labeled datasets is very difficult, and there are diminishing returns in making the dataset larger and larger. Hence, unsupervised representation learning has drawn lots of attention for its quick access to arbitrary amounts of data, although its performance is still limited so far. The most common method used in unsupervised representation learning is an auto-encoder, which learns representations based on an encoder-decoder paradigm. An auto-encoder (AE) [3] is an artificial neural network used for unsupervised learning of efficient codings. It consists of two parts, an encoder which outputs a hidden representation and a decoder which attempts to reconstruct the input from the hidden representation. In this paper we propose Laplacian pyramid auto-encoders (LPAE), a straightforward modification of the deep convolutional auto-encoder architecture, for unsupervised representation learning. The motivation for LPAE originates from two aspects: (1) There is a basic observation that real-world objects are composed of different structures at different scales. This implies that real-world objects may appear in different ways depending on the scale of observation. Hence, learning feature representations at multiple scales can make a learning system robust to the unknown scale variations that may occur. (2) An auto-encoder uses a bottleneck mechanism for forcing model abstraction, which can prevent a trivial identity mapping from being learned. The bottleneck mechanism leads to an inherent tension: the greater the forced abstraction, the smaller the information content that can be expressed. The Laplacian pyramid provides a framework for multi-path deep auto-encoders where each path can focus on a difference image which preserves part of the original image information. By mimicking the recovery process of the original image in a Laplacian pyramid, a hierarchical encoding strategy is used to aggregate the information content expressed by each encoding path. This strategy improves the learning efficiency of the whole model, enabling it to preserve as much information content as possible while avoiding a trivial identity mapping. A typical architecture of LPAE is shown in Figure 1. LPAE is different from the traditional auto-encoder that tries to reconstruct its own inputs: LPAE uses multi-path auto-encoders to reconstruct the Gaussian pyramid from the Laplacian pyramid. Each path has connections with the next level, which enables the hierarchical encoding strategy mentioned above.

RELATED WORK

Unsupervised representation learning, aiming to use data without any annotation, is a fairly well studied problem in the machine learning community. Examples include dictionary learning [19], independent component analysis [13], auto-encoders [3], matrix factorization [26], and various forms of clustering [11]. We can use the K-means algorithm to group an unlabeled data set into k clusters, whose centroids can be used to produce features [6]. Unsupervised dictionary learning exploits the underlying structure of the unlabeled data to optimize dictionary elements. An example of unsupervised dictionary learning is sparse coding, which aims to learn sets of over-complete bases to represent data efficiently [19].
Recently, deep learning methods trained in a supervised way have dramatically improved the state-of-the-art performance on a variety of computer vision tasks. Since supervised deep learning models are capable of learning high-performance visual representations, what about unsupervised deep learning models? Exemplar CNN [8] proposes a method for training CNNs [18] through a surrogate task automatically generated from unlabeled images. DCGAN [23] identified a family of CNN architectures suitable for the adversarial learning framework (GAN) [9], which has wide application in image generation. Another popular method is to train auto-encoders that learn representations based on an encoder-decoder paradigm. Denoising auto-encoders [28] try to reconstruct the input from a corrupted version of it, which makes the hidden layer discover more robust features. Sparse auto-encoders can learn useful structures in the input data by imposing sparsity on the hidden units during training. Sparsity may be achieved by regularization terms in the loss function [22]. The contractive auto-encoder [25] adds a regularization term to its loss function that makes the model robust to slight variations of input values. By making strong assumptions concerning the distribution of latent variables, variational auto-encoders [16] inherit the auto-encoder architecture for learning latent representations. The stacked what-where auto-encoder [30] attempts to learn a factorized representation that encodes invariance and equivariance, and leverages both labeled and unlabeled data to learn this representation in a unified framework. The ladder network [24] contains several lateral shortcut connections from the encoder to the decoder at each level of the hierarchy, and the lateral shortcut connections allow the higher levels of the hierarchy to focus on abstract invariant features.

APPROACH

The scale-space representation we use is the Laplacian pyramid [4]. After reviewing this, we introduce our LPAE model, which integrates multiple deep CAEs into the framework of a Laplacian pyramid.

Laplacian Pyramid

The Laplacian pyramid is a linear invertible image representation consisting of a set of band-pass images, spaced an octave apart, plus a low-frequency residual. The first step in Laplacian pyramid coding is to low-pass filter the original image g_0 to obtain image g_1, which is considered a "reduced" version of g_0 since both resolution and sample density are decreased. In a similar way we form g_2 as a reduced version of g_1, and so on. Filtering is performed by a procedure equivalent to convolution with one of a family of local, symmetric weighting functions. An important member of this family resembles the Gaussian probability distribution, so the sequence of images [g_0, g_1, ..., g_n] is called the Gaussian pyramid. Suppose we have selected the 5-by-5 generating kernel w; the level-to-level averaging process is performed by the function REDUCE as below:

g_l(i, j) = Σ_{m=−2}^{2} Σ_{n=−2}^{2} w(m, n) g_{l−1}(2i + m, 2j + n),

where i and j denote the coordinates of the pixel. We define a function EXPAND as the reverse of the function REDUCE. Its effect is to expand an (M + 1)-by-(N + 1) image into a (2M + 1)-by-(2N + 1) image by interpolating new node values between the given values. Thus, the EXPAND function applied to image g_l of the Gaussian pyramid yields an image g′_l which is the same size as g_{l−1}:

g′_l(i, j) = 4 Σ_{m=−2}^{2} Σ_{n=−2}^{2} w(m, n) g_l((i − m)/2, (j − n)/2).

Only terms for which (i − m)/2 and (j − n)/2 are integers are included in this sum. The Laplacian pyramid is a sequence of difference images [l_0, l_1, ..., l_n].
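As an illustration, here is a minimal NumPy/SciPy sketch of REDUCE and EXPAND following the two formulas above; the specific generating kernel (Burt and Adelson's choice with a = 0.4), the reflect boundary handling, and the even-size convention (an M-by-N image reduces to M/2-by-N/2) are simplifying assumptions, since the text only specifies a 5-by-5 kernel w:

```python
import numpy as np
from scipy.ndimage import convolve

def generating_kernel(a=0.4):
    # Separable 5-tap kernel; a = 0.4 gives a Gaussian-like weighting.
    w1 = np.array([0.25 - a / 2.0, 0.25, a, 0.25, 0.25 - a / 2.0])
    return np.outer(w1, w1)  # 5x5 kernel w, sums to 1

def reduce_(g, w):
    # REDUCE: blur with w, then drop every other row/column, i.e.
    # g_l(i, j) = sum_{m,n} w(m, n) g_{l-1}(2i + m, 2j + n).
    return convolve(g, w, mode="reflect")[::2, ::2]

def expand(g, w):
    # EXPAND: zero-upsample, then interpolate with w; the factor 4
    # compensates for the inserted zeros (3 of every 4 samples are zero).
    up = np.zeros((2 * g.shape[0], 2 * g.shape[1]))
    up[::2, ::2] = g
    return 4.0 * convolve(up, w, mode="reflect")

def gaussian_pyramid(g0, n_levels):
    # Build [g_0, g_1, ..., g_n] by repeated REDUCE.
    w = generating_kernel()
    gauss = [np.asarray(g0, dtype=float)]
    for _ in range(n_levels):
        gauss.append(reduce_(gauss[-1], w))
    return gauss
```

The difference images of the Laplacian pyramid, defined next, are then obtained by subtracting EXPAND of each level from the level below it.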
Each difference image is the difference between two levels of the Gaussian pyramid. Thus, for 0 ≤ l < n,

l_l = g_l − EXPAND(g_{l+1});

since there is no image g_{n+1} to serve as the prediction image for g_n, we set l_n = g_n.

Laplacian Pyramid Auto-encoders

Suppose we have a Laplacian pyramid [l_0, l_1, ..., l_n] and the corresponding Gaussian pyramid [g_0, g_1, ..., g_n]. The aim of our model is to learn a family of hidden representations for the Laplacian pyramid which can be used to reconstruct the corresponding Gaussian pyramid. A typical architecture of LPAE is shown in Figure 1. We use E_k() and D_k() to denote the encoding network and the decoding network at level k, respectively. The hidden representation h_k is the output of E_k(). For each sub-network, the loss function is the reconstruction error between the Gaussian pyramid level g_k and the output of D_k(), and the total loss is the sum of the losses at all levels.

Details of the Network Architecture

As shown in Figure 2, we use a CNN to encode the input, and employ a deconvolutional net (Deconvnet) [29] to produce the reconstruction at each level. The numbers in each cell denote the size of the receptive field, the number of feature maps and the stride. For example, "3*3*64, 1" at the top left means a convolutional layer with a 3-by-3 receptive field, 64 feature maps and a stride of 1 pixel for each dimension of the input. All convolutional layers and deconvolutional layers use ReLU nonlinearity, which is omitted in the notation. No fully connected layer has been used, which helps handle input data of different sizes. Each layer is followed by a batch normalization layer. The batch normalization (BN) layer [14] is important for the training of deep models based on the CAE, and we give practical proof in the experimental results. We up-sample the outputs of each CNN, and concatenate them with the feature maps of a convolutional layer in the next level. This data flow aims to reverse the process of Laplacian pyramid generation.

Table 3: Classification accuracy on STL-10 and CIFAR-10.
Method | STL-10 | CIFAR-10
K-means Network [5] | 60.1% | 82.0%
HMP [2] | 64.5% | -
View-Invariant K-means [12] | 63.7% | 81.9%
Exemplar CNN [8] | 74.2% | 84.3%
SWW Auto-encoder [30] | 74.3% | -
Supervised state of the art | 70.1% [27] | 96.5% [10]

EXPERIMENTS

To compare our approach to deep CAEs and other unsupervised feature learning methods, we report classification results on STL-10 [6] and CIFAR-10 [17].

Datasets

STL-10 contains 96x96 pixel images and relatively little labeled data (5,000 training samples, 100,000 unlabeled samples and 8,000 test samples). It is especially well suited for unsupervised learning as it contains a large set of 100,000 unlabeled samples. In all experiments, we trained our model and the deep CAEs on the unlabeled subset of STL-10, and used the encoding parts as generic feature extractors. The CIFAR-10 dataset consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images. Since the resolution of CIFAR-10 images is low, we only evaluated the 2-scales LPAE on CIFAR-10.

Experimental Setup

To make a thorough evaluation of our model, we worked with three network architectures of different scales. We have shown the network architecture of the 4-scales LPAE in Figure 2. By removing level 3 of the 4-scales LPAE, we get the 3-scales LPAE. Likewise, we can get the 2-scales LPAE. We use deep CAEs as baselines, and their architectures are shown in Table 1 and Table 2. Each layer is followed by a batch normalization layer. No pre-processing was applied to the training images besides ZCA whitening. All models mentioned above were trained with mini-batch Adaptive Moment Estimation (Adam) [15] with a mini-batch size of 50.
All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02. The learning rate was set to 0.001 in all models. All models were implemented in TensorFlow 1.3 [1]. At test time we applied the encoding network of each model as a generic feature extractor. To the feature maps of each layer we applied the max-pooling method that is commonly used for the STL-10 and CIFAR-10 datasets. The pooled features were then flattened into vectors, and we trained a softmax classifier on these feature vectors. For all models, max-pooling results in 16 or 9 values per feature map.

Classification Performance

In Table 3 we compare LPAE to several unsupervised feature learning methods, including the current state of the art on each dataset. We also list the state of the art for methods involving supervised feature learning (which is not directly comparable). In Table 3 we report the best performance of LPAEs and deep CAEs achieved in our experiments. Figure 3 plots the performance of LPAEs and deep CAEs (with or without BN layers) against the number of epochs. Our observations are as follows. First, LPAE methods outperformed the deep CAEs, which do not consider the scale-space representation. Second, LPAE methods did not achieve the state of the art, but they still outperformed several baselines on STL-10. LPAE methods performed poorly on CIFAR-10, which is likely due to the low resolution of CIFAR-10 images; apparently, low resolution fails to provide significant scale-space information. Third, the performance of LPAE did not increase with the number of scales. In our opinion, this result indicates that determining the number of scales in LPAE requires empirical knowledge. Fourth, the performances of LPAEs and deep CAEs reached their best very quickly and stayed stable after 10 epochs. Fifth, BN has a very important influence on the performance of LPAEs and deep CAEs. The result at epoch 0 in Figure 3 is the performance of random filters. It's clear that LPAEs and deep CAEs perform worse than the random filters without BN layers, and using BN layers leads to a drastic difference in performance.

Reconstruction Loss

It's clear that the Laplacian pyramid and BN layers have important influences on the classification performance. Thus, it's reasonable to study their influences on the reconstruction. Figure 5 and Figure 6 plot the reconstruction loss of LPAEs at the bottom level. In Figure 6 we compared LPAEs with deep CAEs. Apparently, LPAEs performed much better than deep CAEs on image reconstruction, indicating that LPAEs can express more information content than deep CAEs. Meanwhile, LPAE methods performed better than deep CAEs on the classification tasks, indicating that LPAEs achieved better representation abstraction. These experimental results support the viewpoint we mentioned in the introduction section. They also suggest that LPAE is much more suitable than the deep CAE for image generation tasks. Figure 4 and Figure 5 show the influence of BN layers on reconstruction. Removing BN layers leads to failure of training, except for deep CAE II. Apparently backpropagation couldn't drive learning in LPAEs and deep CAE I without BN layers, which also explains their poor classification performance after removing BN layers. The LPAE model has multiple input and output paths, which increases the difficulty of training. Using BN layers led to a drastic drop in reconstruction loss and helped stabilize training.
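For reference, the quantity plotted in Figures 4-6 is the per-level reconstruction loss summed over pyramid levels; since the displayed loss equation in the Approach section did not survive extraction, the following minimal sketch assumes mean-squared error:

```python
import numpy as np

def lpae_total_loss(gauss_pyramid, reconstructions):
    """Sum of per-level reconstruction errors.

    gauss_pyramid:   [g_0, ..., g_n], the target Gaussian pyramid levels.
    reconstructions: [D_0(h_0), ..., D_n(h_n)], decoder outputs of matching shape.
    """
    per_level = [float(np.mean((g - r) ** 2))
                 for g, r in zip(gauss_pyramid, reconstructions)]
    # The "bottom level" loss plotted in Figures 5 and 6 is per_level[0].
    return per_level, sum(per_level)
```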
Deep CAE II showed a different behavior from the other models after removing BN layers: using BN or not had no influence on its reconstruction performance. Looking back at its classification performance, from Figure 3 we can see that its accuracy curve first went up and then went down after removing BN layers, which was different from the other models. In our opinion, this observation suggests that the deeper architecture helped training and resulted in relatively good reconstruction performance, but the training is unstable. One possible explanation is that BN helps gradient flow in deeper models. Overall, BN is critical to get deep networks to begin learning.

CONCLUSIONS
In this paper we embedded deep auto-encoders into the framework of the Laplacian pyramid and applied the model to unsupervised representation learning. The experiments have shown several interesting results that benefit research on, and practical applications of, deep auto-encoder approaches.
2018-01-16T14:59:05.000Z
2018-01-16T00:00:00.000
{ "year": 2018, "sha1": "b930b08656ac620082b1518cfbff6f7259dbcad9", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1801.05278", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b930b08656ac620082b1518cfbff6f7259dbcad9", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
236782537
pes2o/s2orc
v3-fos-license
Research advances of microbial denitrification and application in black and odorous water
High nitrogen content is considered to be one of the main reasons for the black and odor phenomenon in rivers. Microbial denitrification has attracted wide attention because of its simple operation, high economic efficiency, short repair time and little impact on the surrounding environment. However, the denitrification process is also affected by environmental factors, pollutants and changes in microbial communities. In this paper, the main bacteria participating in nitrification, denitrification and anaerobic ammonia oxidation during sewage treatment are introduced, and the adaptation and distribution of the microbial community in each denitrification process are summarized. Finally, the applications and prospects of microbial agents, constructed wetlands and ecological floating islands are objectively discussed. According to the existing research results, it is believed that microbial remediation has a broad prospect in the treatment of urban black and odorous water bodies. However, it is difficult to maintain a stable bacterial community structure, denitrification activity and environmental adaptability of microbial remediation technology in river channels, which is the bottleneck of its application in the treatment of black and odorous water.

Introduction
At present, there is no clear definition of black and odorous water in the world. It is defined as "the generic term for water bodies showing unpleasant color and emitting unpleasant smell" in the Guidelines for the treatment of urban black and odorous water body by the Ministry of Housing and Urban-Rural Development (MOHURD) of the People's Republic of China. High nitrogen content is considered to be one of the main causes of black and odorous pollution in rivers, and denitrogenation of urban rivers has received extensive attention in recent years [1]. Microorganisms exist widely in river ecosystems and play a key role in geochemical processes, especially in the transformation of nutrients (such as nitrogen, phosphorus and sulfur). It has been found that the self-purification function of river ecosystems can remove nitrogen pollutants through the processes of ammonification, nitrification, denitrification and anaerobic ammonia oxidation driven by microorganisms [2]. Microbial remediation achieves water purification through the biological metabolism of pollutants by microorganisms. It has the advantages of simple operation, economic efficiency, short remediation time and little impact on the surrounding environment. However, the process of microbial denitrogenation is also affected by environmental factors, pollutant types and changes in microbial communities [2][3]. The flow velocity of urban rivers and the pollutants received will affect the oxygen demand of microorganisms in sediments, as well as the structure and integrity of eukaryote and bacterial communities [4]. Therefore, unlike in wastewater treatment plants, the microbial action of river ecosystems is affected by more factors. Researchers have studied denitrogenation microorganisms in wastewater treatment from various aspects and have also carried out some practice in water ecological management.
In this paper, the characteristics of nitrogen cycling microorganisms in sewage treatment, the selective effect of environmental factors on nitrogen cycling microorganisms, and the application of microorganisms in water environment treatment practice are summarized, hoping to provide a reference for the treatment of black and odorous water by microbial remediation technology.

Influencing factors of the microbial nitrogen cycling process
The common nitrogen cycling processes in water mainly include nitrification, denitrification and anaerobic ammonia oxidation (ANAMMOX). Among them, the nitrification process includes the aerobic ammonia oxidation and nitrite oxidation processes, and the denitrification and ANAMMOX processes are collectively referred to as "nitrate dissimilatory reduction processes". The activity of microorganisms involved in the nitrogen cycle is easily affected by environmental factors. At present, dissolved oxygen (DO), temperature, substrate concentration, pH and microbial community distribution are found to be the main environmental factors affecting the activity of microorganisms in wastewater denitrogenation.

Nitrification process
Traditional biological nitrification is a two-stage reaction, in which ammonia-oxidizing bacteria (AOB) belonging to Nitrosococcus, Nitrosomonas and Nitrosospira complete the nitrosation process, while nitrite-oxidizing bacteria (NOB) belonging to Nitrobacter, Nitrococcus, Nitrospina and Nitrospira complete the nitrification process [5]. This is a key process whose products determine the path of the nitrogen cycle. Both AOB and NOB are autotrophic microorganisms, but NOB is more heterogeneous than AOB: it is composed of four groups of bacteria, namely Nitrobacter, Nitrococcus, Nitrospina and Nitrospira, which are quite different in evolution, while all AOB belong to Betaproteobacteria. Nitrobacter, Nitrococcus and Nitrospina belong to Alphaproteobacteria, Gammaproteobacteria and Deltaproteobacteria, respectively, while Nitrospira belongs to a separate microbial phylum, Nitrospirae. At present, the control measures aimed at enhancing the nitrification activity of the AOB and NOB microbial communities are to control the dissolved oxygen (DO), temperature, pH, free ammonia (FA), free nitrous acid (FNA), sludge retention time (SRT), etc.

DO
As one of the necessary substrates for nitrification, a proper DO level is an important condition to ensure the activity of nitrifying bacteria. The oxygen supply mode and DO concentration are key parameters of wastewater treatment process control, which affect the development of microorganisms in the nitrogen cycle. The affinity constants of AOB and NOB for oxygen are 0.3 mg/L and 1.1 mg/L, respectively [6], which are quite different. It has also been found that AOB has a strong tolerance to changes in DO, and an alternating aerobic and anaerobic environment is beneficial to the enrichment and growth of AOB [7]. Therefore, it is feasible to control the DO level by engineering means to strengthen ecological denitrogenation in river courses. In recent years, many scholars have studied the regulation of NOB growth by controlling the DO concentration in order to maintain long-term stable operation of nitrification systems. In wastewater treatment, for suspended sludge systems, it is generally considered that short-cut nitrification may be realized when the environmental DO is less than 1.20 mg/L, but it is difficult to realize at high DO. According to Chen et al.
[8], Xie Qinglin et al. [9] and Cui et al. [10], high NH₄⁺-N removal rates and NO₂⁻-N accumulation in short-cut nitrification were achieved at DO concentrations of 0.30~0.50 mg/L, 0.80~1.20 mg/L and 0.10~0.60 mg/L, respectively. At the same time, Wu Chunlei et al. [11] found that it was difficult to realize short-cut nitrification at a DO concentration of 2.00~2.50 mg/L. For attached-growth biofilm systems, short-cut nitrification can be realized at a higher DO level. There is still disagreement in the literature about the behavior of nitrifying bacteria at different DO levels. The oxygen supply mode and DO concentration affect the development of microorganisms in the nitrogen cycle; thus, the microbial community structure of AOB and NOB could be adjusted by selecting the sludge system and oxygen supply mode to change the supply of O₂.

Temperature
Temperature is one of the key factors determining denitrogenation efficiency. AOB and NOB are mesophilic bacteria, and their optimum growth temperature is between 20°C and 30°C. Temperatures that are too high or too low will have a large effect on the nitrification rate of nitrifying bacteria. At different temperatures, the growth rates of AOB and NOB differ. It is generally considered that a higher temperature of 30~35°C is optimal for short-cut nitrification. However, some studies have found that NOB activity is severely inhibited when the temperature is lower than 15°C, while AOB is relatively less inhibited, so NO₂⁻-N can be accumulated at lower temperatures [12]. In addition, different nitrifying bacterial strains are sensitive to temperature to different degrees. Siripong et al. [13] monitored the proportion of AOB in wastewater treatment plants and found that it changed obviously with the seasons: when the water temperature was low in winter, the proportion of Nitrosospira increased obviously, while when the water temperature was high in summer, the AOB were mainly Nitrosomonas. When studying the proportion of NOB at different temperatures, Alawi et al. [14] found that Nitrospira could adapt to a wide range of temperatures and grew normally at 10~28°C; however, when the temperature was lower than 17°C, the growth of Nitrobacter was limited. Lücker et al. [15] found that Nitrotoga had a higher proportion at low temperature and was the dominant NOB group.

pH
pH affects the population structure of nitrifying bacteria in the following two major aspects. Firstly, the optimum pH for nitrifying bacteria is between 7 and 8, and the growth rate of nitrifying bacteria decreases in acidic or alkaline environments; AOB is more sensitive to pH changes. Thus, it is necessary to maintain a proper pH during nitrification. Secondly, a change in pH will affect the existing forms of ammonia, and thereby the population structure of nitrifying bacteria. With an increase in pH, the proportion of free ammonia (FA) in water increases, while with a decrease in pH, the corresponding concentration of free nitrous acid (FNA) increases. Excessive FA and FNA will significantly inhibit the activity of nitrifying bacteria. AOB and NOB have different tolerances to FA, which inhibits NOB at concentrations of 0.1~1.0 mg/L but only inhibits AOB at concentrations of 10~150 mg/L [16]. FNA is an effective bactericide, and its inhibitory effect on NOB is also higher than that on AOB.
Studies have shown [17] that at concentrations of 0.22~1.35 mg/L, FNA may reduce the activity of NOB to less than 4.8%, while the activity of AOB remains greater than 80%. To sum up, Table 1 summarizes the suitable environmental characteristics of the nitrification process. The optimum environment for nitrification is one with sufficient dissolved oxygen, a temperature of 20~30°C and a pH of 7~8. This shows that sufficient dissolved oxygen (DO) and a suitable temperature enable the removal of NH₄⁺-N from the water in river ecosystems. The key substrate for which AOB and NOB compete is oxygen.

Denitrification process
The denitrification process is the main process by which nitrogen is removed from the water, and it plays an important role in reducing the nitrogen load of black and odorous water. Studies have shown that denitrification in urban river sediments can remove nearly 20% of the nitrogen load from the river [18]. Jin et al. [19] found that heterotrophic nitrification is an important nitrification process in the sediments of the Yangtze River estuary. Denitrifying bacteria mainly consist of heterotrophic denitrifying bacteria and autotrophic denitrifying bacteria, and their reactivity is usually closely related to environmental factors such as DO, temperature, carbon source, pH and the carbon-to-nitrogen ratio (C/N). Oxygen content is one of the key factors in the denitrification process of sediments. Among denitrifying bacteria, an important microbial community of denitrifying microorganisms, more than 50 genera and more than 130 strains have been frequently reported. Among them, the relatively common heterotrophic denitrifying bacteria belong to Pseudomonas, Bacillus, Hyphomicrobium, Enterobacter, Micrococcus, Brevibacterium, Alcaligenes, Paracoccus, etc. [20]. With further study of the heterotrophic denitrification process, recent studies have also found that, in addition to the anaerobic denitrification process, there is also an aerobic denitrification process, and some aerobic denitrifying strains have been isolated, such as Thiosphaera pantotropha, Alcaligenes faecalis, Pseudomonas putida, Pseudomonas stutzeri T13, Pseudomonas stutzeri YZN-001, Klebsiella pneumoniae CF-S9 and Acinetobacter sp. SYF26. Recently, researchers have found that some microorganisms in nitrogen-containing wastewater with a low C/N can use reduced sulfur compounds (elemental sulfur, sulfides, thiosulfates, etc.) as electron donors for autotrophic denitrification. The microbial community driving this process is widely distributed in marine, terrestrial and other natural and artificial environments, and the reaction is conducted by chemoautotrophic bacteria, mainly Thiobacillus denitrificans. At present, sulfur-based autotrophic denitrifying bacteria are found to belong mainly to Proteobacteria and are classified into the following categories [21]: 1) chemoautotrophic bacteria, mainly including Thiobacillus denitrificans, Thiomicrospira denitrificans and Thiobacillus thioparus; 2) facultative autotrophic bacteria, mainly including Thiobacillus delicatus, Thiosphaera pantotropha, Thiobacillus thyasiris and Paracoccus denitrificans; 3) filamentous bacteria, such as Thioplaca and Beggiatoa; and 4) other strains capable of denitrification and denitrogenation, such as Bacillus, Rhodococcus, Pseudomonas and Ochrobactrum. Table 2 summarizes the typical characteristics of sulfur-based autotrophic denitrification [22][23][24][25][26].
Besides being proven in the laboratory, the process of sulfur-based autotrophic denitrification has also been tried in practice. Huang et al. [27] maintained the denitrogenation effect of a sulfur-based autotrophic denitrification constructed wetland by intermittent aeration at low temperatures (-5~10°C), and found that suitable sulfide levels can promote denitrification in sediments. Ren et al. [28] successfully treated low-carbon wastewater through a sulfur-based autotrophic denitrification constructed wetland, so that the effluent met the reuse water standard of iron and steel enterprises. The denitrification process therefore involves a wide range of microbial community types, and screening different microbial communities can meet the denitrogenation requirements of black and odorous water in different environments. Cultivating specific microbial communities according to the characteristics of the target water and carrying out corresponding engineering strengthening measures is a good way to remove nitrogen from water.

ANAMMOX process
Anaerobic ammonium oxidation (ANAMMOX) is a new microbial nitrogen transformation pathway, with chemoautotrophic ANAMMOX bacteria as the executing bacteria, in which N₂ is generated by the direct reaction of NO₂⁻ and NH₄⁺ without an organic carbon source or O₂. This process produces little or no greenhouse gas N₂O. At present, five genera of Planctomycetales, i.e., Brocadia, Kuenenia, Anammoxoglobus, Jettenia and Scalindua, have been classified as ANAMMOX bacteria [29]. ANAMMOX bacteria are very sensitive to environmental factors, so their activity is easily affected by them. However, they are widely distributed in nature, even in environments with extreme temperature and pH.

2.3.1. Substrate concentration
Studies on the NH₄⁺-N substrate concentration for ANAMMOX still focus on the denitrogenation of wastewater with high NH₄⁺-N (>500 mg N/L). At low NH₄⁺-N concentrations (<100 mg N/L), it is difficult to achieve a sufficient and stable supply of the key substrate, nitrite nitrogen [30]. Therefore, when the influent NH₄⁺-N concentration is reduced to 100 mg/L, the NH₄⁺-N removal rate drops to about 40%, and the NH₄⁺-N removal load drops sharply to about 0.5 kg N·m⁻³·d⁻¹ [31].

2.3.2. Temperature
The optimum growth temperature range of ANAMMOX bacteria is 30~40°C. de Almeida Fernandes et al. found that the denitrogenation efficiency of ANAMMOX decreased as the temperature decreased from 35°C to 20°C [32], so the temperature of most ANAMMOX reactors is maintained above 30°C [33] to avoid the problem that the growth rate of ANAMMOX bacteria decreases at low temperature and the operational efficiency becomes low. However, Gilbert et al. [34] achieved a modest denitrogenation efficiency of about 40% in a short-cut nitrification-ANAMMOX process at 13°C, and Chen et al. [35] found that the activity of ANAMMOX bacteria was not inhibited at 25°C.

2.3.3. Dissolved oxygen (DO)
DO should be strictly controlled in the ANAMMOX reaction to avoid reversible or even irreversible inhibition. Egli et al. [36] observed reversible and irreversible ANAMMOX inhibition at DO concentrations of 0.08 mg/L and 1.44 mg/L, respectively. Irreversible inhibition was also observed under intermittent aeration (with a DO concentration of 0.1~0.16 mg/L). Therefore, ANAMMOX bacteria need anaerobic conditions with low DO to survive.

2.3.4. pH
A stable pH environment should be maintained for the stable metabolism of ANAMMOX.
The pH value can directly affect the growth and enzyme activity of the bacteria, and can also indirectly affect ANAMMOX activity by affecting the concentrations of FA and FNA around the microbial community. Results have shown that the optimum pH for the growth and activity of ANAMMOX bacteria in wastewater treatment is between 7.2 and 7.6 [37], but their adaptable pH range is between 6.5 and 9.3. Because of their strong alkali resistance, ANAMMOX bacteria show higher activity than other microbial communities in higher pH environments.

2.3.5. Distribution of the microbial community
In recent years, there have been extensive studies on ANAMMOX at different spatial scales, including river sediments at the regional scale, paddy soils, river sediments at the small-area scale, wetland and lake sediments, and even rhizosphere soil at the micro scale. These show that ANAMMOX bacteria exist widely across scales, especially in all of the sediments studied. Therefore, the geographical distribution and community status of ANAMMOX bacteria can be used to effectively enhance the role of microorganisms in the water nitrogen cycle. Sediment is considered to be the key site of denitrification and ANAMMOX in water systems due to the existence of an oxic-anoxic interface, and the microorganisms at this interface have obvious denitrogenation activity. Some studies have observed a high ANAMMOX rate at the upper limit of saturated soil, which indicates that the ANAMMOX process is significantly active at the water-land interface [38]. Results have also shown that ANAMMOX is widespread in freshwater, and a high abundance of ANAMMOX bacteria can be detected in dryland soil with a high water content (>29%) [39]. Some studies have even shown that ANAMMOX bacteria can be reactivated by water after long-term dormancy in arid soil [40]. In addition, in the ocean, up to 50% of nitrogen loss is attributed to the ANAMMOX process. Table 3 summarizes the common environments of ANAMMOX bacteria.

In recent years, a biochemical reaction with the characteristics of both ANAMMOX and iron reduction has been discovered, which is called Feammox. According to the study of Huang et al. [41], a culture system DO below 0.02 mg/L was beneficial to ammonia oxidation: compared with DO > 0.02 mg/L, the abundance of Acidimicrobiaceae bacterium A6, which is capable of ammonia oxidation and iron reduction, increased among total bacteria, while the abundance of other AOB and ANAMMOX bacteria decreased. This strain, A6, provides a microbial basis for Feammox. Liu [42] considered that Feammox can adapt to acidic environments (pH 4~5) and keep strong activity at low temperature, and that it can be combined with ANAMMOX. Huang et al. [41] found that an ammonia nitrogen removal rate of 64.5% was achieved in a continuous-flow anaerobic biofilm reactor at pH 4~5. It can be seen that ANAMMOX is suited to anaerobic, weakly alkaline environments, and the ANAMMOX bacterial community is widely distributed in nature and can be cultivated and enriched in most environments. In weakly acidic environments, if suitable iron ions are provided as electron acceptors for the reaction, Feammox can occur and nitrogen can also be removed, so it has broad potential in the denitrogenation of river ecosystems. To sum up, the DO concentration is very important for microbial nitrogen removal, and nitrification, denitrification and ANAMMOX have different requirements for it. Therefore, creating a DO concentration gradient in the river channel is a prerequisite for microbial nitrogen removal.
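As a rough, back-of-the-envelope illustration of why such DO gradients partition the nitrogen-cycle processes: assuming standard Monod saturation kinetics (a modelling assumption, not a result of the studies cited above) and the oxygen affinity constants reported in the nitrification section (about 0.3 mg/L for AOB and 1.1 mg/L for NOB), the relative activity of each guild at a given DO concentration S is:

\[
\frac{\mu}{\mu_{\max}} = \frac{S}{K_{\mathrm{O_2}} + S},
\qquad
S = 0.5\ \mathrm{mg/L}:\quad
\frac{0.5}{0.3 + 0.5} \approx 0.63\ \text{(AOB)},
\qquad
\frac{0.5}{1.1 + 0.5} \approx 0.31\ \text{(NOB)}
\]

Under this simple model, AOB retain roughly twice the relative activity of NOB at around 0.5 mg/L DO, consistent with the short-cut nitrification window reported above, while ANAMMOX requires near-zero DO; a spatial DO gradient can therefore separate the three processes within one river channel or biofilm.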
At present, alternating anaerobic and aerobic environments can be created in river channels through constructed wetland structural design, microbial embedding and biofilm fillers. Many environmental factors affect the function of nitrogen cycling microorganisms, and although these microbial communities are widely distributed, the application of microbial nitrogen removal technology in black and odorous water still faces a series of difficulties. Controlling and creating suitable conditions for the different nitrogen removal processes by engineering means is a feasible approach. The metabolic pathways discussed here may therefore provide a new idea for biofilm systems consisting of aerobic ammonia-oxidizing bacteria in the outer layers of the biofilm and ANAMMOX bacteria in the anaerobic core. In such biofilm systems, the community structure of AOB and NOB, which are frequently encountered at the aerobic-anaerobic interface, may play the key role.

In-situ remediation through microbial agents
Because the natural purification by indigenous microorganisms in the water environment is slow, attention has turned to applying microbial agents to remove pollutants from rivers in order to degrade pollutants as soon as possible and restore the water ecology. Due to their strong targeting, high activity and relatively low cost, microbial agents have been the subject of many experimental studies and practical applications in the treatment of black and odorous rivers. Tang et al. [43] used a denitrogenation bacterial agent and biological stimulants to remediate nitrogen-contaminated sediments. After 115 days of treatment, the removal rate of TN in the sediments was 14.7%, and the removal rate of nitrate nitrogen in the overlying water was also improved. It was found that the diversity of the bacterial community in the sediments increased significantly, and the dominant bacterial phyla were Nitrospirae, Deferribacteres and Chloroflexi. Guo et al. [44] added Pseudomonas stutzeri strain T1 to Taihu Lake water samples and found that the removal rates of ammonium nitrogen and nitrate nitrogen by the strain reached 60% and 75%, respectively, and the water quality of the final sample was improved from Class V to Class II. Wu et al. [45] applied a denitrogenation bacterial agent to a river in Xinbei District, Changzhou City, and found that the agent had a certain purification effect on the excessive NH₃-N, TN and TP in the river; the degradation efficiency for NH₃-N was higher, but the concentration easily rebounded. Wang [46] added a compound microbial agent to the Qiongjiang River at the Xiaodu section of Tongnan District for bio-enhanced water treatment. The removal rates of COD, NH₃-N and TP reached 51.72%, 41.44% and 52.38%, respectively, demonstrating that bio-enhanced technology can effectively treat river water. It can thus be seen that suitably screened microbial agents have an ideal denitrogenation capacity for black and odorous water. However, the impact of microbial agents on the environment and how to maintain a lasting effect remain to be studied.

Constructed wetland treatment
Constructed wetlands, sometimes called ecological wastewater treatment plants, remove pollutants from wastewater by physical adsorption and biodegradation. Because of their simple operation and maintenance, low management cost, lack of secondary pollution and landscape value, constructed wetlands have been widely used.
However, the denitrification efficiency of constructed wetlands depends on the organic carbon source, and a lack of organic carbon is the primary obstacle to nitrate removal. The biggest obstacle to the development of constructed wetlands is that their purification efficiency is affected by seasonal low temperatures. The authors of [47] used a combined constructed wetland for pollution treatment of the Zaohe River in Xi'an. Using a combined process of horizontal subsurface flow and surface flow, they constructed a wetland with an area of 8000 m² and operated it for two years with a treatment capacity of 362 m³/d. The average treatment efficiencies for COD, SS, NH₄⁺-N and TP were 74.5%, 92%, 57.5% and 69.2%, respectively. Huang et al. [48] improved denitrogenation capacity and reduced greenhouse gas emissions by constructing a comprehensive vertical-flow wetland, and found that ANAMMOX, denitrifying anaerobic methane oxidation (DAMO) and denitrification reactions can occur simultaneously in the wetland, with a stable denitrification process in the system, proving that the denitrogenation ability for polluted water can be enhanced by coupling various denitrogenation processes in a constructed wetland. By combining an aerobic/anoxic process and Fe/C micro-electrolysis technology with a vertical-flow constructed wetland, Deng et al. [49] established an efficient water treatment system composed of aerobic organisms (e.g. Nitrosomonas), anaerobic organisms (e.g. Nitrospira), autotrophic denitrifying bacteria (e.g. Thiobacillus and Hydrogenophaga), heterotrophic denitrifying bacteria and Fe-oxidizing bacteria (FOB) (e.g. Acidithiobacillus ferrooxidans). The removal rates of ammonia nitrogen, total nitrogen, total phosphorus and COD were 94.3%, 86.2%, 98.0% and 92.7%, respectively, in a low-temperature environment of -11.5°C~8.0°C, achieving excellent grey water treatment.

Purification through ecological floating beds
Based on ecosystem principles, an ecological floating bed (EFB) uses environment-friendly materials to build an environment for aquatic plant culture and growth in water. It integrates biological dynamics and soilless culture, planting terrestrial or higher aquatic plants in polluted water so as to purify the water, provide a beautiful environment and offer a place for birds and fish to perch and lay eggs. It is not affected by water depth or the degree of eutrophication, and it can improve the landscape and save construction costs. Olguín et al. [50] established and tested EFBs in two artificial ponds and found that the synergistic effect of microorganisms and plants can effectively remove nutrients, especially nitrate, increase the DO in water to a certain extent, and remove 9%~86% of Escherichia coli. However, the development of EFBs is restricted by flood control and drainage requirements in most districts. To sum up, because the input of exogenous nitrogen into urban rivers is complex and influenced by many environmental factors, the internal reactions of river sediments are more intense and their mechanisms more complex than those of other sediments. The improvement of black and odorous water is a highly specialized, systematic project, and it is difficult to eliminate black and odorous water using a single treatment technology. Therefore, in practical engineering, different technologies need to be combined, and a long-term mechanism should be established for permanently clean water.
Conclusion
Microbial remediation technology is a simple, economical and efficient technology with a short remediation time and little impact on the surrounding environment, and it has broad application prospects in the field of urban black and odorous water treatment. There are large numbers of microorganisms related to the nitrogen cycle in riverbank and bottom sediments, but the microbial community structures established under the traditional theory of biological denitrogenation of wastewater have high environmental requirements, which makes it difficult to support their enrichment and efficient denitrogenation in rivers. The microbial nitrogen cycle involves a wide range of microbial types, and screening different microbial communities can meet the denitrogenation requirements of black and odorous water in different environments. Cultivating specific microbial communities is therefore a promising approach. In practice, microbial agents can effectively purify water in a short time, but their relatively simple microbial community structure is not suited to maintaining the purification effect over the long term. At present, the purification efficiency of constructed wetlands is also affected by seasonal low temperatures, and the development of EFBs is restricted by flood control and drainage requirements. Therefore, the practical forms of microbial denitrification in black and odorous water need to be further explored. Due to the different DO and substrate requirements of the different nitrogen cycle processes, a reasonable biofilm microbial community structure in the natural environment is particularly important for the stability and adaptability of denitrogenation capacity. Therefore, further study of microbial community structure is necessary for permanently clean rivers. Future studies in the field of urban black and odorous water treatment will need to clarify which specific conditions build an ideal microbial community structure and what its ecological niche is in these kinds of environments.
2021-08-03T20:03:51.907Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "a89b9b956d742dce7f8ba30ff0e852edf33985e2", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/825/1/012011", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "a89b9b956d742dce7f8ba30ff0e852edf33985e2", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
244705023
pes2o/s2orc
v3-fos-license
Roles of Polycomb complexes in regulating gene expression and chromatin structure in plants
The evolutionarily conserved Polycomb Group (PcG) repressive system comprises two central protein complexes, PcG repressive complex 1 (PRC1) and PRC2. These complexes, through the incorporation of histone modifications on chromatin, have an essential role in the normal development of eukaryotes. In recent years, a significant effort has been made to characterize these complexes in the different kingdoms, and despite there being remarkable functional and mechanistic conservation, some key molecular principles have diverged. In this review, we discuss current views on the function of plant PcG complexes. We compare the composition of PcG complexes between animals and plants, highlight the role of recently identified plant PcG accessory proteins, and discuss newly revealed roles of known PcG partners. We also examine the mechanisms by which the repression is achieved and how these complexes are recruited to target genes. Finally, we consider the possible role of some plant PcG proteins in mediating local and long-range chromatin interactions and, thus, shaping chromatin 3D architecture.

INTRODUCTION
The development of multicellular organisms relies on cells and tissues that establish unique gene expression programs. These programs depend on transcription factors (TFs) that guide the binding of the transcriptional machinery to specific gene promoters and control its subsequent activity. It has become evident that nucleosome organization within chromatin can either expose or hide functionally important regulatory DNA sequences that are required for the binding of TFs and the transcriptional machinery, thus strongly influencing gene expression. Furthermore, nucleosome organization can determine the formation of higher-order structures that maintain transcriptional states through cell division. The organization of nucleosomes within chromatin can be altered in several ways (Rosa and Shaw, 2013), including the incorporation of posttranslational modifications on nucleosomal histone tails (Bannister and Kouzarides, 2011). The importance of histone modifications in the control of developmental processes was revealed a long time ago during investigations of the role of Polycomb Group (PcG) proteins. PcG genes were initially discovered in Drosophila melanogaster (Drosophila), where they are necessary for the repression of homeotic genes and therefore for the specification of the body plan (for review, see Kassis et al., 2017). Subsequently, Drosophila PcG genes were shown to have homologs in vertebrates and plants, where, among other processes, they also regulate development (reviewed in Whitcomb et al., 2007; Mozgova and Hennig, 2015; Xiao and Wagner, 2015). In this review, we discuss the current view of how the PcG system regulates gene expression in plants. We first present the composition and activities of PcG complexes as a prerequisite to understanding how PRC1 and PRC2 perform their functions.

Figure 1. Schematic representation of PRC1 E3 modules in Drosophila, vertebrates, and Arabidopsis, and accessory proteins found in canonical PRC1s (cPRC1s) and variant PRC1s (vPRC1s) in the different cases. Possible accessory proteins that have not been verified by AP-MS are indicated in a dotted background.
The complete names of vertebrate accessory proteins that are not mentioned in the text are: AUTS2, autism susceptibility protein 2; BCOR, BCL-6 co-repressor; CK2, casein kinase 2; FBRS, fibrosin; L3MBTL2, lethal(3) malignant brain tumor-like protein 2; SKP1, S-phase kinase-associated protein 1; DCAF7, DDB1 and CUL4-Associated Factor 7; and WDR5, WD40-repeat protein 5.

RING1 and PCGF proteins dimerize through their RING domains, and this dimerization facilitates their interaction with an E2-conjugating enzyme to enable histone ubiquitination (Bentley et al., 2011). The RAWUL domain binds to a range of accessory subunits to give rise to different PRC1s (for review see Geng and Gao, 2020; Barbour et al., 2020; Blackledge and Klose, 2021). Drosophila Sce, Psc, and Su(z)2 also contain these domains; however, Psc and Su(z)2 have, in addition, a long, intrinsically disordered and positively charged C-terminal region (CTR) (Lo et al., 2009; Beh et al., 2012) (Figure 1). This CTR is able to bind chromatin and DNA in vitro (King et al., 2005; Emmons et al., 2009; Beh et al., 2012; Kang et al., 2020) and to mediate chromatin compaction (Francis et al., 2004), nucleosome bridging (Lo et al., 2012), transcription inhibition (King et al., 2005), and chromatin remodeling (King et al., 2005; Lo and Francis, 2010). Several variant PRC1s (vPRC1s) have subsequently been identified. In Drosophila, the dRing-associated factor (dRAF) complex contains Sce, Psc/Su(z)2, and the H3K36-specific demethylase dKDM2 (Lagarou et al., 2008). The Drosophila homolog of the vertebrate Ring1 and Yin Yang (YY)1 binding protein (dRybp) has also been connected to this complex (Fereres et al., 2014) (Figure 1). The dRAF complex promotes both monoubiquitination of H2A and demethylation of H3K36me2 (Lagarou et al., 2008), and most of the H2A monoubiquitination activity in Drosophila has been attributed to this complex. However, loss of Psc/Su(z)2 function decreases total H2A monoubiquitination by 30%-40%, whereas the knockdown of L(3)73Ah, a less-characterized PCGF homolog in Drosophila (Irminger-Finger and Nöthiger, 1995), decreases H2A monoubiquitination by 70%. This result suggests the existence of an alternative L(3)73Ah-dRing/Sce module that is responsible for incorporating most of this modification (Lee et al., 2015). In vertebrates, vPRC1s contain an E3 module composed of RING1A/B, one of the six PCGFs, and RYBP or its paralog YY1-associated factor (YAF) 2, which regulates the catalytic activity of the E3 module (Figure 1) (reviewed in Geng and Gao, 2020; Barbour et al., 2020; Blackledge and Klose, 2021). The associations of CBX proteins and RYBP/YAF2 with the E3 module are mutually exclusive. Whereas RYBP/YAF2 strongly stimulates the activity of the module, CBX proteins have a weaker stimulatory effect. Accordingly, cPRC1 displays much lower E3 ligase activity than the vPRC1s (Buchwald et al., 2006). In addition, each vPRC1 contains different accessory proteins depending on the PCGF that it incorporates (Figure 1). Although we are not going to describe the vast array of vPRC1 accessory proteins identified in vertebrates (for review see Barbour et al., 2020; Chetverina et al., 2020; Geng and Gao, 2020; Blackledge and Klose, 2021), we would like to highlight the diversity of their biochemical properties. For instance, these proteins include DNA binding activities that can target PRC1 to specific sites (e.g.
KDM2B, the dimer of MYC-associated factor X [MAX] and MAX gene-associated protein [MGA], and the dimer of E2F6 and dimerization partner [DP]1/2), histone-modifying activities directed to remove counteracting marks (e.g. the histone deacetylases HDAC1 and HDAC2 and the H3K36me2 demethylase KDM2B), and proteins that regulate PRC1 activity (e.g. the ubiquitin C-terminal hydrolase ubiquitin-specific protease 7 [USP7], which prevents self-ubiquitination of RING1 and PCGF to stabilize the complex and thus regulates H2A monoubiquitination levels [de Bie et al., 2010; Maertens et al., 2010; Hu et al., 2014; Maat et al., 2021]) (Figure 1). Interestingly, a recent work has revealed the existence of BMI1-like truncated genes (BMI1-L) restricted to embryophytes (Figure 1). These genes encode a RAWUL domain but lack the N-terminal RING finger domain, which is the catalytic part of BMI1. They share part of the BMI1 exon-intron structure, suggesting a common evolutionary history. A BLASTP search identified 138 homologs. Whereas Marchantia polymorpha and Selaginella moellendorffii contain a single gene, Physcomitrella patens, Amborella trichopoda, Pinus pinaster, and most dicots contain two genes, and monocots usually contain three. The tomato, rice, and maize homologs are involved in shoot apical meristem and axillary meristem development (Tabuchi et al., 2011; Yao et al., 2019; López et al., 2021). One of the tomato homologs, Super determinant1A (Sde1A), and the Marchantia polymorpha homolog, MpBMI1-L, have been shown to genetically interact with the BMI1 proteins (López et al., 2021). These findings raise the possibility that these genes may be involved in the regulation of PRC1 activity by, for instance, competing with BMI1 for interaction partners or, alternatively, in the formation of non-enzymatically active vPRC1s. In any case, a cPRC1 is not conserved in plants, as there are no plant homologs of the Pc/CBXs, Ph/PHCs, or SCM proteins. Nevertheless, it was proposed that plants have a cPRC1-like complex that contains RING1A/B, BMI1A/B/C, and the plant-specific proteins LIKE-HETEROCHROMATIN PROTEIN 1 (LHP1), also known as TERMINAL FLOWER 2 (TFL2), and EMBRYONIC FLOWER 1 (EMF1). LHP1 was considered to be the functional equivalent of the Pc/CBX proteins because, although LHP1 contains a CHROMO domain and a CHROMO SHADOW domain like animal Heterochromatin Protein 1 (HP1) (Gaudin et al., 2001; Kotake et al., 2003), it localizes in euchromatin and binds H3K27me3 marks (Zhang et al., 2007b; Turck et al., 2007). In support of this notion, LHP1 was shown to interact with the RING1 and BMI1 proteins in yeast two-hybrid assays (Xu and Shen, 2008; Chen et al., 2010). However, recent affinity purification followed by mass spectrometry (AP-MS) experiments have revealed that LHP1 co-purifies with PRC2 (Derkacheva et al., 2013; Liang et al., 2015; Bloomer et al., 2020). Similarly, EMF1 was proposed to be a PRC1 component because of its structural and functional similarities to the Drosophila Psc-CTR (Beh et al., 2012). However, EMF1 is required for H3K27me3 marking at some PcG target genes (Kim et al., 2012; Yin et al., 2021), and it also co-purifies with PRC2 components (Liang et al., 2015; Bloomer et al., 2020). Therefore, LHP1 and EMF1 are now considered PRC2-associated proteins and, as such, they are included in the next section. In addition, it has been shown that ALFIN1-like 6 (AL6) interacts with RING1 and BMI1 proteins (Molitor et al., 2014).
AL1-7 form a plant-specific protein family whose members contain a PAL domain at the N terminus and a plant homeodomain (PHD) at the C terminus (Molitor et al., 2014). The PHD domain of AL proteins was reported to bind H3K4me3 (Zhao et al., 2018), and the PAL domain has been implicated in the interaction with RING1 and BMI1 (Peng et al., 2018). Based on several results, the ALs have been proposed to target PRC1 to H3K4me3-marked active chromatin in order to promote the transition from the active to the repressed state (Molitor et al., 2014; Peng et al., 2018). Interestingly, two recent reports have shown that Nodulin Homeobox Factor (NDX) interacts with the RING1s but not with the BMI1 proteins and that it co-purifies with VAL1, BMI1, and RING1 in AP-MS experiments (Zhu et al., 2020; Mikulski et al., 2021). Furthermore, the levels of H2AK121ub are affected in ndx mutants (Zhu et al., 2020; Mikulski et al., 2021), pointing to NDX as a new PRC1 accessory protein. NDX is an atypical PHD-containing protein that has been previously implicated in single-stranded DNA recognition and stabilization of the R-loop formed at the 3′ end of FLOWERING LOCUS C (FLC) (Sun et al., 2013), as well as the regulation of abscisic acid signaling (Zhu et al., 2020); however, its exact role in PRC1 function remains to be investigated. VERNALIZATION 1 (VRN1), a B3 domain-containing plant-specific protein involved in the vernalization response (Levy et al., 2002), has also been proposed to be a PRC1 component (Mylne et al., 2006; Holec and Berger, 2012); however, there is not yet biochemical evidence to support this association. Finally, the H3K4me3 demethylase JMJ14 was shown to interact with BMI1A/B and EMF1 (Wang et al., 2014); however, it has also recently been found to co-purify with PRC2 (see next section). In summary, there are not many confirmed PRC1 accessory proteins in plants. In addition, it is not known whether these or other unknown accessory proteins associate with different E3 modules constituting different plant vPRC1s. It is likely that, as in vertebrates, different specialized complexes act in different tissues or developmental stages.

The number of plant PRC2-associated proteins has recently significantly increased
PRC2 core components are broadly conserved (Figure 2). These core components are essential for the H3K27 trimethyltransferase activity of the complex (reviewed in Blackledge et al., 2015; Chetverina et al., 2020; Bieluszewski et al., 2021; Blackledge and Klose, 2021). In Drosophila and vertebrates, PRC2 diversity is achieved through the association of accessory proteins that can modulate its enzymatic activity or chromatin target sites. There are two PRC2s, PRC2.1 and PRC2.2. PRC2.1 includes Polycomb-like (Pcl) in Drosophila and the mutually exclusive Pcl homologs PCL1, PCL2, and PCL3 in vertebrates. These proteins stimulate the methyltransferase activity of E(z)/EZH (Nekrasov et al., 2007; Sarma et al., 2008). Vertebrate PRC2.1 also contains Elongin BC and PRC2-associated protein (EPOP) and PRC2-associated LCOR isoform 1 (PALI1) or PALI2 (Beringer et al., 2016; Liefke et al., 2016; Conway et al., 2018), which also support methyltransferase activity through an undefined mechanism. These proteins have not been found in Drosophila PRC2.1 (Figure 2).
PRC2.2 is defined by the presence of Jarid2 in Drosophila or JARID2 in vertebrates, which bind to H2A monoubiquitination marks through their N-terminal ubiquitin interaction motif, and Jing in Drosophila or AEBP2 in vertebrates, which enhance the enzymatic activity and regulate the chromatin binding of the complex (Figure 2) (Chetverina et al., 2020; Blackledge and Klose, 2021). In Arabidopsis, the number of identified PRC2 accessory proteins has significantly increased in recent years. These proteins co-purify with PRC2 core components and affect H3K27me3 levels (Figure 2). However, with several exceptions, it is not clear whether they preferentially associate with a specific PRC2 core or whether they give rise to sub-complexes through their mutually exclusive association with the same PRC2 core. As mentioned in the previous section, despite the fact that LHP1 and EMF1 were considered PRC1 components, they co-purify with PRC2 core components and affect H3K27me3 levels (Derkacheva et al., 2013; Liang et al., 2015; Zhou et al., 2017; Bloomer et al., 2020; Yin et al., 2021). Thus, although in animals these two activities are associated with PRC1, in plants they are included in the list of PRC2-associated proteins (Figure 2). Similarly, in the yeast Cryptococcus neoformans, the PRC2 components co-purify with a CHROMO domain-containing subunit, Ccc1, that recognizes H3K27me3 (Dumesic et al., 2015) as LHP1 does, suggesting that this could be a more common event than expected based on the information available from Drosophila and vertebrates. More recently, telomere repeat-binding factors (TRB1-3), which contribute to PRC2 recruitment (Zhou et al., 2018a), and the harbinger transposase-derived ANTAGONIST OF LHP1 1 (ALP1) and ALP2 proteins, which antagonize PRC2 activity, have also been shown to co-purify with PRC2 core components (Liang et al., 2015; Velanis et al., 2020) (Figure 2). Interestingly, it seems that LHP1 and EMF1 do not associate with PRC2 when the ALP proteins are present, and vice versa, suggesting that they are accessory components of separate sub-complexes (Figure 2). PRC2 core components also co-purify with the ubiquitin C-terminal hydrolases UBIQUITIN-SPECIFIC PROTEASE 12 (UBP12) and UBP13 (Liang et al., 2015; Derkacheva et al., 2016; Bloomer et al., 2020) (Figure 2). Phylogenetic analysis of these proteins showed that they share a similar protein sequence with the vertebrate vPRC1 accessory protein USP7 (March and Farrona, 2017; Derkacheva et al., 2016). USP7 regulates PRC1 activity by maintaining the integrity of the complex. Consequently, USP7 inhibition results in decreased levels of H2A monoubiquitination at PRC1 target loci (de Bie et al., 2010; Maertens et al., 2010; Hu et al., 2014; Maat et al., 2021). A recent report showed that the ubp12/13 mutant displays a higher number of upregulated genes than downregulated genes, suggesting a role for UBP12/13 in repression. Also, a larger number of genes lose H2AK121ub than gain H2AK121ub in the ubp12/13 mutant, which therefore shows a global reduction in H2AK121ub levels (Kralemann et al., 2020). These results suggest a role of UBP12/13 in regulating PRC1 integrity/activity, similar to that of USP7. However, based on analyses of the less-abundant loci that gained H2AK121ub in the ubp12/13 mutant, it has been proposed that UBP12/13 may directly regulate H2A monoubiquitination levels (Kralemann et al., 2020).
EARLY BOLTING IN SHORT DAYS (EBS) and its homolog SHORT LIFE (SHL), which contain an N-terminal Bromo-Adjacent Homology (BAH) domain followed by a PHD domain with a C-terminal extension, also co-purify with PRC2 core components (Figure 2). These proteins bind to H3K4me3 and H3K27me3 marks, interact with EMF1, and are required for the maintenance of H3K27me3 levels. ebs/shl/lhp1 triple mutants show a significant global reduction in H3K27me3 levels, leading to the proposal that, together with LHP1, they are the H3K27me3 readers in Arabidopsis (Krause and Turck, 2018). In line with this proposal, EPR-1, a BAH protein of the filamentous fungus Neurospora crassa, has been shown to bind H3K27me2/3 and to be implicated in gene repression. Although PRC1 core components are absent in N. crassa, EPR-1 acts as an H3K27me2/3 reader and is required for the formation of nuclear foci reminiscent of Polycomb bodies. epr-1 mutants do not exhibit appreciably altered H3K27 methylation levels, indicating that EPR-1 acts downstream of PRC2 (Wiles et al., 2020). Moreover, the vertebrate BAH and coiled-coil domain-containing protein 1 (BAHCC1) and BAH domain-containing protein 1 (BAHD1) also act as H3K27me3 readers and interact with HDACs, resulting in optimal repression of PcG target genes (Fan et al., 2020, 2021). Phylogenetic analysis has shown that EPR-1 orthologs are widely distributed across the eukaryotes, suggesting an ancient role of EPR-1 homologs in PcG repression (Wiles et al., 2020). Interestingly, a member of a novel family of putative Jumonji-type 2-oxoglutarate/Fe(II)-dependent dioxygenases, INCURVATA 11 (ICU11), robustly co-purified with PRC2 core components and accessory proteins (Bloomer et al., 2020) (Figure 2). Neither ICU11 nor EMF1 were found to interact with VIN3 or its homologs VRN5, VEL1, and VEL2, reinforcing the view that distinct PcG complexes operate over different spatial and temporal scales. Several lines of evidence support a role for ICU11 in mediating H3K36me3 demethylation, suggesting that ICU11 may enable the transition from an H3K36me3-marked active to an H3K27me3-marked inactive chromatin state (Bloomer et al., 2020). The H3K4me3 demethylase JMJ14 was detected among ICU11-enriched peptides and also interacts with EMF1 (Wang et al., 2014), supporting its association with PRC2 (Figure 2). Therefore, a combination of activities in the whole complex would link demethylation of active histone modifications with the incorporation of repressive modifications. Unlike in animals (Blackledge and Klose, 2021), a PRC2-associated protein that can recognize and bind H2A monoubiquitination marks has not yet been identified in plants. Nevertheless, the Arabidopsis ZUOTIN RELATED FACTOR 1 (ZRF1) homologs AtZRF1a and AtZRF1b have been proposed to perform PcG-related and PcG-independent functions (Feng et al., 2016). All homologs of ZRF1 have a zuotin domain at their N terminus, which is composed of a DnaJ domain and a potential ubiquitin-binding domain, and tandem repeats of the SANT domain at their C terminus (Feng et al., 2021). Vertebrate ZRF1 can specifically recognize and bind to monoubiquitinated H2A (Richly et al., 2010; Barbour et al., 2020). Likewise, AtZRF1b can bind ubiquitin in vitro and pull down monoubiquitinated H2A from plant protein extracts, acting as a possible H2AK121ub reader (Feng et al., 2016, 2021).
However, in contrast to vertebrate ZRF1, which competes with PRC1 to activate many PcG-repressed target genes, AtZRF1a/b seem to promote H3K27me3 deposition for gene repression (Feng et al., 2016), which may provide a mechanism for PRC2 reading of H2AK121ub after PRC1 deposition. Nevertheless, it remains to be investigated whether ZRF1 proteins associate with PRC2 in Arabidopsis. Finally, although they have not been found to co-purify with PRC2 core components, the PWWP domain-containing proteins PWWP DOMAIN INTERACTOR OF POLYCOMBS 1-4 (PWO1-4), which are H3 readers via their PWWP domain, have been shown to interact with the three H3K27 trimethyltransferases in Arabidopsis and to recruit PcG proteins to subnuclear domains (Hohenstatt et al., 2018). In addition, another subgroup of PWWP domain proteins (PDP1-3) has been proposed to regulate PcG function (Zhou et al., 2018b), suggesting a role for PWWP domain-containing proteins in sustaining PcG function. In summary, the existence of this long list of PRC2 accessory proteins indicates that the regulation of PRC2 activity may be critical for specifying cell identity during development and differentiation.

Role of PcG complexes and their hallmarks in plant transcriptional regulation
PcG complexes play a crucial role in the regulation of gene expression; however, the exact mechanism by which they exert this regulation is not yet clear. Previous studies in Drosophila and vertebrates proposed a PRC2-initiated hierarchical model in which PRC2 establishes H3K27me3, which is recognized by the PRC1 subunit Pc/CBXs. PRC1 in turn incorporates H2A monoubiquitination to maintain stable repression (Wang et al., 2004). However, later studies in vertebrates found that the activity of vPRC1s can recruit PRC2 for H3K27me3 marking, revealing a reverse alternative model for the incorporation of PcG marks (Blackledge et al., 2014; Kalb et al., 2014). In support of this notion, several results indicate that the recognition of H2A monoubiquitination is crucial for PRC2 nucleation at target sites. Moreover, the PRC2 accessory proteins Jarid2/JARID2 are able to bind H2A monoubiquitination marks (Barbour et al., 2020). The incorporation of H3K27me3 is then necessary to reinforce the binding of PRC2 and the spreading of H3K27me3, in which EED and possibly other H3K27me3 readers play an important role (Blackledge and Klose, 2021). Nevertheless, because H3K27me3 is also recognized by the Pc/CBXs, PRC2 can in turn recruit cPRC1, indicating that both models, the reverse and the classical, operate in animals. Interestingly, cPRC1 contributes only minimally to H2A monoubiquitination levels in vivo, whereas it seems to be required for the formation of Polycomb bodies (Blackledge and Klose, 2021), as will be discussed below. Despite all these results, the participation of H2A monoubiquitination in gene repression has remained controversial. Some reports have provided evidence that it is indispensable (Kundu et al., 2017), whereas others have shown that it is not (Illingworth et al., 2015; Pengelly et al., 2015). Therefore, additional work is needed to determine which chromatin context acts in concert with this modification to regulate gene expression. Genome-wide distribution analyses of H2AK121ub and H3K27me3 marks in Arabidopsis showed that most H2AK121ub peaks overlap with regions immediately downstream of transcriptional start sites (Kralemann et al., 2020).
Conversely, H3K27me3 peaks occupy complete gene bodies and thus partially overlap with H2AK121ub (Zhang et al., 2007a; Turck et al., 2007; Lafos et al., 2011; Zhou et al., 2017; Kralemann et al., 2020), unlike their extended co-localization in animals. In addition, H2AK121ub marks in Arabidopsis are widespread, often co-localizing with H3K27me3 but also occupying a set of genes devoid of H3K27me3 (Kralemann et al., 2020), revealing three different subsets of PcG targets based on the presence of H2AK121ub, H3K27me3, or both. Transcriptional analyses of these PcG target subsets showed that most H3K27me3-marked genes are not expressed or display low expression levels in WT seedlings, consistent with previous reports (Zhang et al., 2007a; Turck et al., 2007; Lafos et al., 2011). Conversely, a considerable percentage of genes with only H2AK121ub marks are transcriptionally active (Kralemann et al., 2020), although their average expression levels are still lower than those of active genes that lack H2AK121ub marks (Yin et al., 2021). These results indicate that the two modifications may have different roles in transcriptional repression. In addition, profiling of H2AK121ub and H3K27me3 marks in the clf28swn7 mutant showed that PRC2 activity was not required for H2AK121ub marking, ruling out the classical hierarchical model for the incorporation of PcG marks proposed in animals. By contrast, loss of BMI1 function affects the incorporation of H3K27me3 at the genes in which H2AK121ub and H3K27me3 marks co-localize (Kralemann et al., 2020), consistent with the reverse alternative model (Blackledge et al., 2014; Calonje, 2014; Kalb et al., 2014; Merini and Calonje, 2015). PRC1 components have been shown to interact with PRC2-associated proteins in Arabidopsis. For instance, RING1 and BMI1 interact with LHP1 and EMF1 (Xu and Shen, 2008; Bratzel et al., 2010), which may suggest that these interactions, rather than H2AK121ub, promote PRC2 recruitment and H3K27me3 marking. However, recent results in Marchantia polymorpha, which shares many signaling pathways with Arabidopsis but displays low genetic redundancy, revealed that H2A monoubiquitination marks are essential for PRC2 recruitment. Through mutation of the single gene encoding canonical H2A in M. polymorpha, H2A monoubiquitination was shown to be required for H3K27me3 incorporation. Consistent with this result, recent works showed that loss of vertebrate RING1B catalytic activity largely phenocopies the complete removal of the RING1B protein (Blackledge et al., 2020; Tamburri et al., 2020). Furthermore, the animal PRC2 accessory protein Jarid2/JARID2 binds H2A monoubiquitination marks, supporting the participation of this modification in PRC2 recruitment (Barbour et al., 2020). Unfortunately, an Arabidopsis PRC2 accessory protein that can bind H2AK121ub marks has not been identified; however, the fact that AtZRF1a/b proteins bind H2AK121ub and promote H3K27me3 marking makes them potential candidates to carry out this function. Alternatively, another unknown H2AK121ub reader associated with PRC2 may perform this function. In any case, recent results indicate that a combination of both H2AK121ub binding and direct interaction between PRC1 and PRC2 components may participate in PRC2 recruitment (Baile et al., 2021; Liu et al., 2021).
Despite the implication of H2AK121ub in PRC2 recruitment, the fact that average transcription levels of genes marked only with H2AK121ub are higher than those of genes marked with H2AK121ub/H3K27me3, which in turn are higher than those of genes marked only with H3K27me3, led us to propose that H2AK121ub is not a repressive mark per se (Kralemann et al., 2020). However, in support of a repressive role for this modification, it has recently been shown that BMI1 proteins also mediate monoubiquitination of the H2A variant H2A.Z. H2A.Z can be monoubiquitinated at lysine 129 (H2A.ZK129ub). The incorporation of this modification is required for H2A.Z-mediated transcriptional repression (Gómez-Zambrano et al., 2019). Transcriptomic comparison among the H2A.Z mutant hta9hta11, hta9hta11 complemented with a native form of H2A.Z, and hta9hta11 complemented with a mutated H2A.Z that cannot be monoubiquitinated showed that most genes upregulated in hta9hta11 recovered WT-like expression levels in the presence of native H2A.Z, but not in the presence of mutated H2A.Z (Gómez-Zambrano et al., 2019). Interestingly, a considerable number of the genes upregulated in hta9hta11 are already active in the WT, but their expression levels are further increased in hta9hta11, suggesting that H2A.ZK129ub marks modulate the expression of target genes. Furthermore, a recent report showed that H2AK121ub marks are associated with a less accessible chromatin state at transcriptional regulation hotspots that are enriched for the binding of TFs (Yin et al., 2021), and this presumably interferes with transcription. This report showed that decreased levels of H2AK121ub at both only-H2AK121ub and H2AK121ub/H3K27me3-marked chromatin led to an increase in chromatin accessibility. Nonetheless, the fact that average expression levels of only-H2AK121ub genes are higher than those of H2AK121ub/H3K27me3 genes indicates that chromatin marked only with H2AK121ub is transcriptionally permissive, suggesting a role for this modification in modulating rather than switching off gene expression (Figure 3). Chromatin accessibility is further reduced when H3K27me3 is present. However, although H2AK121ub/H3K27me3-mediated inaccessible chromatin is still transcriptionally responsive, as it can be reactivated when the levels of these modifications are reduced, only-H3K27me3-marked chromatin is less responsive. Interestingly, only-H3K27me3 chromatin is not usually associated with transcriptional regulation hotspots (Yin et al., 2021), indicating that these sites are required for gene responsiveness (Yin et al., 2021) and supporting a role for H2AK121ub in the modulation of accessibility at these sites (Figure 3). Accordingly, the presence of H2AK121ub marks has been positively linked to gene responsiveness, whereas the presence of H3K27me3 marks showed a weak negative association (Kralemann et al., 2020). Interestingly, the histone demethylase RELATIVE OF EARLY FLOWERING 6 (REF6) (Lu et al., 2011) binds to and removes H3K27me3 preferentially from genes marked with H2AK121ub (Kralemann et al., 2020), further supporting the possibility that H2A monoubiquitination creates a partially repressed state that enables a quick response to stimuli. All together, these data also indicate that PcG regulation of only-H3K27me3-marked genes may involve a different mechanism.
Accordingly, PRC1-independent H3K27me3-marked genes are already repressed before PRC2 recruitment, unlike PRC1-dependent genes that require PRC2 for repression (Kralemann et al., 2020) (Figure 3). On the other hand, it has been proposed that H2AK121ub should be removed from H2AK121ub/H3K27me3-marked genes after recruitment of PRC2 to maintain a stable repression, placing UBP12/13 as key factors to remove these marks (Kralemann et al., 2020; Hinsch et al., 2021). To verify this possibility, it would be necessary to identify direct targets of UBP12/13 and to determine whether these proteins affect PRC1 integrity and activity or whether they directly deubiquitinate H2A.
Targeting PcG complexes in plants
Despite the fact that PcG activities are essential for controlling the transcriptional state of genes at a particular stage, time, or condition, none of the PcG core components are able to bind specific DNA sequences; therefore, this regulation may require different intermediaries that attract PcG complexes to specific target genes. In Drosophila, multiple TFs acting in combination have been shown to recruit PcG complexes to cis-elements in small genomic regions called Polycomb response elements (PREs) (Kassis and Brown, 2013; Steffen and Ringrose, 2014). In vertebrates, the recruitment of PcG complexes is broadly related to unmethylated CpG-rich DNA regions that are proximal to promoters and are known as CpG islands (Deaton and Bird, 2011). These regions are recognized and bound by the zinc finger-CXXC domain of KDM2B and by the winged helix domain of PCL1/2/3 (Blackledge et al., 2014; Li et al., 2017b). Other TFs and histone-modification-binding associated proteins also collaborate in the recruitment of PcG complexes (Blackledge and Klose, 2021). Moreover, certain PcG complexes transiently interact with TFs for target site recognition in certain contexts (Blackledge and Klose, 2021). On the other hand, as an alternative to TFs, long non-coding RNAs (lncRNAs) have been involved in targeting of PcG complexes and, more recently, RNA-DNA hybrid structures, which are known as R-loops (Chetverina et al., 2020; Blackledge and Klose, 2021), suggesting that different mechanisms can contribute to the specific tethering of PcG complexes.
Figure 3. Before the recruitment of PcG complexes, PRC1-dependent genes are active, whereas PRC1-independent genes are repressed. Once PRC1 is targeted to active genes, chromatin becomes less accessible, and transcription is downregulated. PRC2 then recognizes H2AK121ub marks and PRC1 components and mediates the transcriptional repression of these genes by promoting an inaccessible chromatin state, which is responsive to reactivation. Repressed genes are targeted only by PRC2, which maintains an inaccessible chromatin state.
In Arabidopsis, the recruitment of PRC1 has been related to VAL1/2 factors. The B3 domain of VAL factors specifically recognizes RY elements (CATGCA) in the regulatory regions of several target genes (Suzuki et al., 2007; Questa et al., 2016; Wu et al., 2018a; Sasnauskas et al., 2018). In fact, as mentioned previously, several lines of evidence suggest that VAL factors are PRC1-associated proteins (Questa et al., 2016; Mikulski et al., 2021). In addition to the B3 domain, VAL1/2 factors also contain a plant homeodomain-like (PHD-L) domain, a cysteine- and tryptophan-rich zinc finger domain (CW), and an ethylene-responsive element binding factor-associated amphiphilic repression (EAR) domain (Suzuki et al., 2007).
The PHD-L domain has been shown to participate in the homo- or heterodimerization of VAL1 and VAL2 and has also been proposed to act as a reader of H3 methylation states, like the CW domain (Hoppmann et al., 2011; Yuan et al., 2016). The EAR domain is involved in the interaction with TOPLESS (TPL)/TPL-RELATED (TPR) 1-4 corepressors or SAP18, which in turn recruit HDA activities (Kagale and Rozwadowski, 2011). Accordingly, VAL1/2 have been reported to interact with HDA activities (Zhou et al., 2013; Zeng et al., 2020). Together, these data indicate that VAL1/2 binding to chromatin involves, in addition to cis-elements, interaction with histone modifications. Furthermore, VAL factors are able to recruit other histone-modifying activities (Zhou et al., 2013; Zeng et al., 2020; Baile et al., 2021), suggesting that they can act as platforms for the simultaneous assembly of different epigenetic mechanisms. Apart from the VAL factors, the AL proteins have been proposed to bind H3K4me3 through their PHD domain and to recruit PRC1 (Molitor et al., 2014; Peng et al., 2018). Interestingly, most AL proteins can also bind to the conserved cis-element GNGGTG/GTGGNG (ALFIN1 elements; Bastola et al., 1998; Wei et al., 2015), raising the possibility that PRC1 could be tethered to specific sites by the AL proteins independently of VAL1/2. In support of this possibility, the promoters of genes upregulated in the bmi1abc mutant are enriched in ALFIN1 elements (Merini et al., 2017). The existence of Arabidopsis PRE-like sequences containing cis-elements for the binding of TFs has been proposed in several independent studies (Berger et al., 2011; Lodha et al., 2013; Mu et al., 2017; Xiao et al., 2017). Moreover, two key families of TFs, the class I BPC TFs and the ZINC FINGER TFs, co-localize with PRC2 at thousands of loci, and loss or reduction of their function causes a loss of PRC2 binding (Xiao et al., 2017). A recent report examined the in vivo mediation of TF binding to a synthetic locus whose promoter lacked any of the known cis-elements involved in PcG recruitment. Interestingly, the results implicated some of these factors in recruiting one or the other PcG complex in Arabidopsis (Baile et al., 2021). These findings showed that the binding of VAL1 leads to the incorporation of H2AK121ub and H3K27me3 and the removal of H3 acetylation marks (Figure 4A). Whereas BMI1 proteins directly interact with VAL1, PRC2 marking requires both PRC1 activity and VAL1, indicating that different interactions collaborate in PRC2 recruitment. Interestingly, SAP18 and HDA activities co-purify with VAL1 and the PRC2-associated proteins VIN3 and VRN5 (Questa et al., 2016), suggesting a connection between HDACs and PRC2 via some of these proteins. Furthermore, this report showed that PRC2 activity could be recruited independently of PRC1 by the binding of TFs from different families that, like the VALs, contain an EAR domain (Figure 4A). The EAR domains have been shown to bind SAP18 and in turn to recruit HDA activities. The binding of these EAR factors also leads to the removal of H3 acetylation marks (Baile et al., 2021). These results are consistent with the existence of PcG targets in which PRC2 recruitment is dependent or independent of PRC1 activity. Moreover, they suggest that the EAR-SAP18 interaction acts as a link between PRC2 and HDACs to mediate gene repression. Finally, lncRNAs have been associated with PcG repression in plants.
For instance, several lncRNAs generated from the FLC locus have been implicated in the recruitment of PRC2 and FLC repression (Costa and Dean, 2019). COLDAIR sense lncRNA interacts with CLF (Heo and Sung, 2011), COLDWRAP is an FLC promoter-associated lncRNA that also interacts with CLF to form a repressive intragenic chromatin loop (Kim and Sung, 2017), and COOLAIR antisense lncRNAs recruit CLF indirectly via an RNA binding protein called FLOWERING CONTROL LOCUS A (FCA) (Tian et al., 2019). In addition, lncRNA-4, an intronic lncRNA from the floral homeotic AGAMOUS (AG) gene, is expressed in leaves and interacts with CLF to deposit H3K27me3 histone marks into the AG locus (Wu et al., 2018b) (Figure 4B). In addition to lncRNAs, recent studies have indicated that R-loops may also support PcG target site recognition. For example, the lncRNA APOLO mediates the formation of R-loops by sequence complementarity with its targets, which attract LHP1 (Ariel et al., 2020) (Figure 4C). Interestingly, through the analysis of a COOLAIR-induced R-loop at the 3′ end of FLC, a recent report characterized the mechanism by which resolution of a nascent-transcript-induced R-loop promotes chromatin silencing. Stabilization of a COOLAIR-induced R-loop at the 3′ end of FLC is mediated by NDX, which inhibits further antisense transcription (Sun et al., 2013). This enables the RNA binding protein FCA and other 3′-end processing factors to polyadenylate the nascent antisense transcript, which clears the R-loop and recruits chromatin modifiers to remove H3K4me1 and H3K36me3 and promote H3K27me3 accumulation. Also, NDX has recently been shown to co-purify with PRC1 components and to be required for H2AK121ub accumulation and H3K27me3 incorporation at the FLC nucleation region (Mikulski et al., 2021). Therefore, it will be important to establish whether this NDX-PRC1 association points to an additional NDX/R-loop interaction in the FLC nucleation region or is the result of the gene loop. In summary, TFs and specific combinations of their binding sites play an important role in the recruitment of PcG complexes to chromatin in plants, and epigenetic modifications of histones and RNA-protein interactions can stabilize interactions between the complexes and chromatin.
The role of PcG proteins in shaping chromatin structure in plants
PcG proteins are also related to a higher-order level of chromatin organization. PcG proteins form nuclear bodies (Polycomb bodies), suggesting that parts of the genome bound by PcG proteins gather together in the nucleus to interact, to share common machinery, and to create local concentrations of specific factors (Pirrotta and Li, 2012). Accordingly, PcG proteins are able to fold chromatin at multiple scales by establishing local loops, compacting chromatin domains, and mediating long-range interactions among H3K27me3-associated domains (Figure 5) (Cheutin and Cavalli, 2019). The formation of chromatin domains covered by H3K27me3 and the interaction among these domains shapes a chromatin network that seems to be crucial for PcG function in animals. Recent reports have shown that, in animals, cPRC1 plays a crucial role in creating these domains and in establishing the interaction among them (for a review, see Guo et al., 2021).
This cPRC1 ability relies mainly on Drosophila Psc and Ph and vertebrate CBX2 and PHC (Grau et al., 2011; Isono et al., 2013; Wani et al., 2016), but it seems to be independent of cPRC1 monoubiquitination activity, as it persists when RING1B catalytic activity is impaired (Boyle et al., 2020). The IDR of CBX2, which shares biochemical and functional properties with Drosophila Psc-CTR (Grau et al., 2011), and the SAM domain of Ph/PHC, which mediates head-to-tail oligomerization of cPRC1 (Isono et al., 2013), confer this ability. Accordingly, loss of these cPRC1 proteins leads to the dissolution of Polycomb bodies (Isono et al., 2013; Wani et al., 2016; Tatavosian et al., 2019; Plys et al., 2019) (Figure 5). Furthermore, CBX2 and PHC are able to mediate liquid-liquid phase separation (LLPS) that underlies the formation of condensates (Grau et al., 2011; Seif et al., 2020) and has been proposed to induce chromatin compaction and segregate repressed chromatin from the transcriptional machinery. Interestingly, data have suggested that H3K27me3 may not be the seeding site for CBX2-mediated LLPS, as H3K27me3 does not prevent the formation of CBX2 condensates in live cells. However, removal of H3K27me3 greatly reduces the binding of other CBXs, such as CBX7 or CBX8, to chromatin, indicating that they play a different role (Tatavosian et al., 2019). By integrating these and other results, a model has recently been proposed to explain the function of PcG complexes in the mediation of higher-order chromatin organization. The model is known as the scaffold-adaptor-client phase separation: CBX2-PRC1 is the scaffold, CBX7-PRC1 is the adaptor, and H3K27me3-marked chromatin is the client. In this model, CBX7-PRC1 recruits H3K27me3-marked chromatin into established CBX2-PRC1 condensates through interactions between CBX7 and H3K27me3 and polymerization of PHC between CBX2-PRC1 and CBX7-PRC1 (Kent et al., 2020). In addition, the ability of BAH-containing proteins to bind H3K27me3 (for instance, BAHCC1; Fan et al., 2020) suggests that these proteins contribute to establishing bridges between surrounding chromatin domains covered with H3K27me3. However, the interplay between these and other factors remains to be dissected. In plants, several studies have suggested that H3K27me3 is a key contributor to chromatin topology. Local interaction of H3K27me3 domains is reduced in the Arabidopsis clfswn double mutant background (Feng et al., 2014). In addition, H3K27me3 is enriched at long-distance interacting loci across the Arabidopsis genome (Huang et al., 2021). However, it remains unknown whether any plant PcG proteins help to shape these interactions. EMF1 has been reported to show structural and functional similarities to Drosophila Psc-CTR and the IDR of CBX2 in vitro (Grau et al., 2011; Beh et al., 2012). Moreover, a region in the center of EMF1 is required for the formation of Polycomb bodies. Therefore, it may participate in mediating phase-separated condensates, which may help to compact chromatin, promote H3K27me3 marking, and establish interactions between H3K27me3-marked domains. In addition, LHP1 contains a disordered hinge region that, when perturbed, causes the Polycomb bodies to be disrupted (Berry et al., 2017), suggesting that LHP1 participates in PcG condensate formation in plants. Moreover, as in animals, the ability of the CHROMO domain of LHP1 and the BAH domain proteins EBS and SHL to bind H3K27me3 could mediate interactions among H3K27me3 domains in plants.
In addition, PWO1-4 proteins are proposed to recruit PcG proteins to subnuclear domains and to participate in chromatin compaction (Hohenstatt et al., 2018).
Figure 5. PcG proteins and chromatin structure in plants. Different PcG proteins, such as EMF1, LHP1, or BAH proteins, may promote the formation of H3K27me3-marked domains, chromatin compaction, long-range chromatin loops, and phase-separated Polycomb bodies, which contribute to the maintenance of genes in a stable repressed state. (Panels: Polycomb body; chromatin compaction; long-range interactions.)
In summary, although further work is required to explore the potential roles of these and other proteins in the mediation of plant 3D chromatin structures, the fact that Pc/CBX2 and Psc-CTR activities are linked to PRC2 instead of PRC1 in plants suggests differences in the way these interactions are established and maintained.
CONCLUDING REMARKS AND PERSPECTIVES
Although our understanding of the PcG repressive mechanism in plants has increased significantly in recent years, many gaps remain. It is becoming clear that PcG complexes regulate gene expression at multiple levels; however, to understand how these different levels of regulation are achieved in plants, we need to identify all the players involved in the system. On the one hand, it is not known whether different plant PRC1 E3 modules exist and whether the BMI1-L truncated proteins can associate in a PRC1. Because the BMI1-Ls do not contain the RING domain, their role in PcG-mediated repression may be independent of H2A monoubiquitination. An intriguing possibility is that they play a role in creating Polycomb domains and establishing interactions among them, similar to animal cPRC1, which displays this function independently of E3 monoubiquitin ligase activity. In addition, compared with animals, the number of identified PRC1 accessory proteins in plants is still not very large; moreover, it is not known whether these or other unknown accessory proteins associate with different E3 modules constituting different plant vPRC1s. On the other hand, even though many more PRC2 accessory proteins are known now than some years ago, in most cases we still do not know whether they associate with different PRC2 cores constituting different PRC2 sub-complexes and whether these sub-complexes are involved in different levels of regulation. Addressing all these questions would require performing AP-MS experiments in different cell types and conditions. Interestingly, although PRC1 and PRC2 enzymatic activities are conserved between animals and plants, several animal PRC1-associated activities have been linked to PRC2 in Arabidopsis. These activities participate in reading and propagating H3K27me3 marks or in mediating chromatin compaction, such as the activities displayed by LHP1 and EMF1. A possible explanation for this finding could lie in the different genomic distributions of H2AK121ub and H3K27me3 marks between animals and plants. In Arabidopsis, H2AK121ub marks are enriched at what could be considered the PcG nucleation region of target genes, whereas H3K27me3 marks co-localize with H2AK121ub at these regions but also extend over gene bodies. By contrast, these two marks co-localize over large regions in animals. Therefore, the spreading of H3K27me3 and chromatin compaction abilities may well be associated with PRC1 in animals, as the two complexes follow the same pattern and maintain a feedforward loop.
However, these activities should be linked to PRC2 in plants, as PRC1 and PRC2 markings do not follow the same pattern. Interestingly, a considerable number of PRC1 and PRC2 accessory proteins are plant specific, indicating that in plants the PcG system has been re-invented, incorporating a different array of proteins. Nevertheless, similar activities have been recruited, demonstrating the effectiveness of the combination for the repressive mechanism. Finally, it will be important to extend our understanding of the Arabidopsis PcG system to other plant species with larger genomes to determine whether or not the recruitment of the complexes and their roles in shaping chromatin 3D structure follow the same rules.
2021-11-28T16:39:35.592Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "757c6add5b9dceeca4e0d64679ca1c248ffae995", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.xplc.2021.100267", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7f188174deca3d1b0589c316edb14bfa479f1f39", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
219768349
pes2o/s2orc
v3-fos-license
Where we are in fighting against COVID-19
On March 11, 2020, the World Health Organization announced the novel coronavirus disease (COVID-19) as a pandemic. Despite an increasing number of international attempts using maps to present and communicate COVID-19-related information in different organizations, most map products have only used the presentation function of maps. Against this backdrop, we offer an automatically daily-updated, color-blind-friendly, Tableau-based interactive dashboard to demonstrate where and how different countries are fighting against COVID-19. The dashboard allows users to specify countries they want to compare and aggregate relevant data on a daily, weekly, or monthly basis.
With the development of online mapping platforms and the accessibility of COVID-19-related data, there is an increasing number of international attempts with maps to present and communicate COVID-19-relevant information in different organizations. For example, Dong et al. (2020) from Johns Hopkins University, utilizing ArcGIS Online with crowdsourced data, present one of the first online dashboards related to the global COVID-19 outbreak. However, most map products have only used the presentation function of maps. To understand changing patterns, it is essential to consider the analytical reasoning facilitated by interactive (geo)visual interfaces (Kraak, 2020). Against this backdrop, we present a Tableau-based interactive dashboard (https://bit.ly/ECDCCOVID19) to compare and contrast where people are fighting against COVID-19 around the world. When designing the dashboard, we mainly took the following technical and cartographical elements into consideration. First, the data are downloaded automatically from the European Centre for Disease Prevention and Control and refreshed in the Tableau platform, rather than processed manually. Second, while users can select as many countries or territories as they want for comparison in this interactive dashboard, we set the dashboard to display no more than five items, based on the highest number of death cases, to reduce the recognition load. Third, about one in 12 men is color-blind; in the most common type of red-green color-blindness, people cannot distinguish between the two colors (National Eye Institute, 2019). Therefore, we chose all color-blind-friendly colors for this dashboard. Fourth, while the dashboard is refreshed daily, the user can aggregate the data on a daily, weekly, or monthly basis. For up to five countries or territories that a user specifies, the dashboard illustrates (1) the number and accumulated number of new confirmed/death cases after the first confirmed case; and (2) the total confirmed/death cases per one million population.
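As a rough sketch of the aggregation behind these views (the file name and column names below are hypothetical; the actual dashboard performs this inside Tableau on the automatically refreshed ECDC feed), the top-five selection and weekly aggregation might look like:

```python
import pandas as pd

# Hypothetical daily case/death counts per country, one row per country and date.
df = pd.read_csv("covid_daily.csv", parse_dates=["date"])

# Keep the five countries with the highest cumulative deaths, as the dashboard does.
top5 = df.groupby("country")["deaths"].sum().nlargest(5).index

# Aggregate the selected countries to weekly totals.
weekly = (
    df[df["country"].isin(top5)]
    .set_index("date")
    .groupby("country")
    .resample("W")[["cases", "deaths"]]
    .sum()
)
print(weekly.head())
```

Swapping "W" for "D" or "M" reproduces the daily and monthly aggregation options mentioned above.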
Figure 1 displays how such geovisual analytics can be applied to show weekly trends and patterns. Figure 1(a) displays the top five countries with the highest mortality so far, as of May 11, 2020. All five countries have a high Gross Domestic Product (GDP), and the United States of America, which has the highest GDP in the world, recorded the highest number of accumulated cases and deaths (The World Bank, 2020). Four of the top five countries are in Europe, and three of them are adjacent to each other. Specifically, the number of cases started to expand in the fifth week in Italy, the sixth week in Spain, and the seventh week in France. However, those numbers began to decline in the four European countries in the ninth to the 11th week. In the US, the number of deaths reached a peak in the 12th week, but the case and death numbers are remarkably higher than in other countries. Figure 1(b) compares the four Asian countries that have been impacted by COVID-19 since January 2020. Japan and Singapore had a small number of cases and deaths in the first 10 weeks, and both reached a peak in the 13th week. The number of deaths in Japan also started to increase in the 14th week. South Korea and mainland China show a similar pattern, in that both countries reached a peak of outbreaks in their sixth week after the detection of their first cases. However, a significant increase in deaths was recorded in China in the 15th week due to new data received from Wuhan (The State Council of the People's Republic of China, 2020). Aggregated data for the latest week/month should be interpreted with caution, as the dashboard may not include data for the entire week/month.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
2020-06-04T09:06:36.013Z
2020-05-31T00:00:00.000
{ "year": 2020, "sha1": "f55412d528394f5fd02bd7e8abece29b648228b8", "oa_license": "CCBY", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0308518X20931515", "oa_status": "HYBRID", "pdf_src": "Sage", "pdf_hash": "432c270c25e296b554bf15cf3cc60c7320467fda", "s2fieldsofstudy": [ "Computer Science", "Political Science" ], "extfieldsofstudy": [ "Geography" ] }
231985622
pes2o/s2orc
v3-fos-license
End-to-end neural network approach to 3D reservoir simulation and adaptation
Reservoir simulation and adaptation (also known as history matching) are typically considered as separate problems. While one set of models is aimed at the solution of the forward simulation problem, assuming all initial geological parameters are known, the other set of models adjusts geological parameters under the fixed forward simulation model to fit production data. This results in many difficulties for both reservoir engineers and developers of new efficient computation schemes. We present a unified approach to the reservoir simulation and adaptation problems. A single neural network model allows a forward pass from initial geological parameters of the 3D reservoir model through dynamic state variables to wells' production rates, and backward gradient propagation to any model inputs and variables. The model fitting and geological parameters adaptation both become optimization problems over specific parts of the same neural network model. Standard gradient-based optimization schemes can be used to find the optimal solution. Using a real-world oilfield model and historical production rates, we demonstrate that the suggested approach allows reservoir simulation and history matching with the benefit of several orders of magnitude simulation speed-up. Finally, to propagate this research, we open-source a Python-based framework, DeepField, that allows standard processing of reservoir models and reproduces the approach presented in this paper.
Introduction
Reservoir simulation is a complex concept that typically includes many steps, ranging from the construction of an appropriate geological model to the estimation of field performance, e.g. oil production rates. However, once all the geological parameters (initial and boundary conditions) are set and control parameters are given, the challenge is to simulate reservoir dynamics, i.e. estimate time-dependent variables (phase saturations, pressure, wells' production rates, etc.). A standard approach is based on a system of hydrodynamic equations and its numerical evaluation (see, e.g. Chen, Huan and Ma (2006)). While the physical equations can be assumed fixed, numerical solution methods are a matter of intense research. Straightforward implementation of finite-difference methods (i.e. discretization of the physical equations) guarantees to provide a solution but requires enormous computational costs in practical cases. Since the middle of the last century, research into parallelized solution schemes and more advanced computation algorithms in application to the petroleum industry has become a separate research field. More recently, the integration of machine learning methods into classical schemes has become a topic of special interest (see, e.g. Sun and Zhang (2020) for implementation details and Koroteev and Tekic (2021) for future perspectives). Modern reservoir simulation schemes provide an excellent approximation of field dynamics in synthetic cases (Kvashchuk, Klöfkorn and Sandve, 2019). However, due to natural uncertainties in the estimation of geological parameters in real-world fields, uncertainties in the obtained solutions easily make them impractical. The process of adapting initial and boundary conditions to align simulated data with actual production rates is known as history matching (HM). The problem is clearly ill-posed and thus admits various adaptation strategies (see, e.g. Oliver and Chen (2011) for a review).
Mathematically, HM is an optimization problem over a certain space of variables. The point is that simulated data are usually obtained with black-box simulation tools, which restricts straightforward application of standard gradient-based optimization methods. Moreover, forward simulation usually requires high computational and time costs, which dramatically complicates alternative approaches. As a result, HM has become a separate research field. One could substantially benefit from considering reservoir simulation and adaptation within a single framework. We present an approach that makes this possible and, moreover, uses the same optimization methods for the solution of both problems. We implement an end-to-end neural network model for reservoir simulation and production rates calculation. The model training phase can be considered as an optimization problem in the space of neural network variables, given a dataset of simulated field scenarios. In the same way, given actual production rates, we consider HM as an optimization problem in the space of geological parameters. By construction, neural network models allow gradient backpropagation to any input and internal variables, so one can apply common optimization algorithms for model fitting (see Goodfellow, Bengio and Courville (2016) for general theory). Of course, gradient-based HM is not a new idea (see e.g. Kaleta, Hanea, Heemink and Jansen (2011) or Gómez, Gosselin and Barker (2001)). The point is that, in contrast to previous works, estimation of gradients does not require elaboration of separate models or substantial model reduction. Gradients in neural network models can be computed analytically, and thanks to modern programming frameworks, there is no need to do it explicitly. Using a real-world oilfield model, we demonstrate that this approach provides accurate solutions in reservoir simulation and adaptation with several orders of magnitude speed benefit in comparison to standard industrial software. Note that in the previous work of Illarionov, Temirchev, Voloskov, Gubanova, Koroteev, Simonov, Akhmetov and Margarit (2020) the HM problem was investigated only with respect to simulated dynamic state variables (pressures and phase saturations), while in this work we consider the most practical problem, given historical wells' production rates. Of course, the suggested model is not aimed at the direct substitution of standard simulation software, which is based on finite-difference methods. While the latter provides highly precise solutions at the cost of large computation time, the neural network approach allows faster approximation by means of reduced accuracy to some extent, which is expected in any proxy model.
Dynamics module
The standard approach to hydrodynamic simulation is to apply the finite-difference (or finite-volume) method to a set of partial differential equations (PDE) of multi-phase flow through a porous medium. Let θ represent static reservoir variables (computational grid, initial permeability and porosity fields, etc.). The reservoir state at time t will be denoted as s(t) and contains pore pressure, gas content, and oil, water, and gas saturations. Let us also denote production and injection schedules at time t as u(t) and call them control variables. In classical simulation, the control variable can be defined in many ways. We will assume it contains bottomhole pressures for all production wells and injection rates for all injection wells (at time t). The output of the simulation at time t is the state at the next time step, s(t + Δt).
Thus, one step of the standard simulation process can be described as
s(t + Δt) = F(s(t), u(t), θ). (1)
An example of classical hydrodynamic simulation with the finite-differences method is shown in Fig. 1. Now we move to the description of the proposed model.
Latent space dynamics
The proposed reservoir simulation model is inspired by the Reduced Order Modelling (ROM) technique presented in Kani and Elsheikh (2018) and recently applied e.g. in Jin, Liu and Durlofsky (2020), and by the Neural Ordinary Differential Equations (neural ODEs) introduced in Chen, Rubanova, Bettencourt and Duvenaud (2018). For brevity we call the model Neural Differential Equations based Reduced Order Model (NDE-b-ROM). The idea is to translate the reservoir dynamics into a latent space using encoder-decoder NNs and reconstruct the latent space dynamics using a differentiable neural ODE. This gives a more flexible approach in contrast to linear decomposition models (e.g. Dynamic Mode Decomposition (Kutz, Brunton, Brunton and Proctor, 2016)) and can be compared to several non-linear models based on NNs proposed in Temirchev, Simonov, Kostoev, Burnaev, Oseledets, Akhmetov, Margarit, Sitnikov and Koroteev (2020); Watter, Springenberg, Boedecker and Riedmiller (2015); Banijamali, Shu, Ghavamzadeh, Bui and Ghodsi (2017). More precisely, we approximate reservoir dynamics in a space of compressed (latent) reservoir state representations z(t). In the following description, we use the symbols E, D, f, etc. to indicate functions that are unknown a priori and are specified during the model training stage. Of course, after the model is trained, these functions are considered completely defined. Thus we assume the existence of mappings E: S → Z and D: Z → S between full-order states and their latent representations such that the composition D ∘ E is close to the identity operator. The latent space dynamics is assumed to be governed by an ODE of the form
dz/dt = f(z(t), û(t), θ̂), (2)
where f(⋅) is some non-linear function, and û and θ̂ represent latent control and latent static variables of an oilfield. The latent control and static variables are assumed to be obtained from mappings E_u: U → Û and E_θ: Θ → Θ̂, respectively. The simulation process starts with an initial reservoir state s(0) and requires a well control schedule u(t) and static information θ. The next reservoir states s(t) are obtained iteratively as follows:
• Encode the initial state z(0) = E(s(0)), the static variables θ̂ = E_θ(θ), and the controls û(t) = E_u(u(t)) for all t;
• Solve the latent ODE for the required period of time using any appropriate numerical scheme, for example, an explicit integration scheme: z(t + Δt) = z(t) + Δt · f(z(t), û(t), θ̂). As a result we obtain the latent solution z(t) for all t;
• Decode the latent solution: s(t) = D(z(t)) for all t.
The overall structure of the proposed process is presented in Fig. 2.
Figure 2. Here θ, s(0), and u(0:t) denote the reservoir static variables, initial state, and control parameters for the time interval (0:t). These variables pass through the encoders (specific for each variable) and are mapped into the latent space variables denoted as θ̂, z(0), and û(0:t). Solving the ODE (2) in the latent space, we obtain a set of latent solutions z(1), z(2), etc. Decoding the latent states, we obtain a solution in the initial space (only some slices of the obtained 3D cubes are shown).
Neural Network Architecture
The introduced mappings E, E_u, E_θ, and D are represented by fully-convolutional NNs (Long, Shelhamer and Darrell, 2015). A benefit of the fully-convolutional architecture is the natural scalability of the model: in the context of reservoir simulation, it allows the processing of oilfields of different sizes. The mappings E, E_u, E_θ (encoders) are approximated by 4-layer fully-convolutional NNs. The dimensionality reduction is controlled by the stride parameter of the convolutions. The mapping D (decoder) is approximated by a similar NN without strides. Instead, the decoder should increase the dimensionality of a latent variable. This increase is achieved by the use of a 3-D analog of the Pixel Shuffle method (Shi, Caballero, Huszár, Totz, Aitken, Bishop, Rueckert and Wang, 2016) (we call it Voxel Shuffle). The function f is approximated by a simple 2-layer convolutional network. All the modules use Batch Normalization (Ioffe and Szegedy, 2015) and the Leaky ReLU non-linearity (Maas, 2013). This architecture is a result of the compromise between the model depth and the ability to fit into a limited GPU memory when training on large reservoir models with a large number of timestamps. Of course, there are many internal parameters in each layer that might require specification. In order to provide full reproducibility of the model, we open-source the code of the model as well as all details of the data processing and model training steps in the GitHub repository https://github.com/Skoltech-CHR/DeepField. It should also be noted that, in contrast to standard downscaling-upscaling procedures, we do not expect substantial information leakage about reservoir heterogeneities in the latent space. The point is that during the model training stage the encoder-decoder pairs are optimized in a way that their composition acts as the identity transform.
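To make the rollout concrete, below is a minimal PyTorch sketch of the encode-integrate-decode loop, with toy single-layer stand-ins for the 4-layer encoders and a transposed convolution standing in for the Voxel Shuffle decoder (all module shapes here are hypothetical; the open-sourced DeepField repository is the reference implementation):

```python
import torch
import torch.nn as nn

class TinyNDEbROM(nn.Module):
    """Toy single-channel version of the encode-integrate-decode rollout."""
    def __init__(self, latent=8):
        super().__init__()
        # Stand-ins for the paper's 4-layer strided fully-convolutional encoders.
        self.enc_state = nn.Conv3d(1, latent, 3, stride=2, padding=1)
        self.enc_static = nn.Conv3d(1, latent, 3, stride=2, padding=1)
        self.enc_control = nn.Conv3d(1, latent, 3, stride=2, padding=1)
        # Latent dynamics f and an upsampling stand-in for the Voxel Shuffle decoder.
        self.f = nn.Conv3d(3 * latent, latent, 3, padding=1)
        self.dec = nn.ConvTranspose3d(latent, 1, 4, stride=2, padding=1)

    def forward(self, s0, theta, controls, dt=1.0):
        z = self.enc_state(s0)
        theta_hat = self.enc_static(theta)
        states = []
        for u in controls:  # explicit Euler steps of the latent ODE (2)
            u_hat = self.enc_control(u)
            z = z + dt * self.f(torch.cat([z, u_hat, theta_hat], dim=1))
            states.append(self.dec(z))  # decode back to the full-order state
        return states

model = TinyNDEbROM()
s0 = torch.rand(1, 1, 16, 16, 16)       # initial state cube
theta = torch.rand(1, 1, 16, 16, 16)    # static properties cube
controls = [torch.rand(1, 1, 16, 16, 16) for _ in range(3)]
states = model(s0, theta, controls)     # list of predicted state cubes
```

Because the network is fully convolutional, the same model accepts cubes of other (even) spatial sizes, which mirrors the scalability argument above.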
Training procedure
NDE-b-ROM is trained end-to-end in a supervised manner. This requires a training dataset D that gives examples of the true dynamics evolution. In supervised learning, the dataset comprises a set of pairs {(x_i, y_i)}, i = 0, …, N. Here x_i is the known information about the i-th reservoir:
x_i = (θ_i, s_i(0), u_i(0:T)), (3)
while y_i is a target containing all the information that should be predicted by the model:
y_i = s_i(Δt:T), (4)
and N is the dataset size. Commonly, datasets of sufficient size are not available due to technical and commercial subtleties. To overcome this problem, we take several hydrodynamic models of real oilfields, randomize them, and perform simulation of the state (target) variables using classical reservoir simulators. Randomization is made in two steps:
• Create a new initial state and static variables by adding a small amount of correlated zero-mean Gaussian noise. Create a new bottomhole pressure schedule by sampling from some distribution (see the discussion below). This step generates x_i.
• Feed the generated initial data into a classical finite-difference hydrodynamic simulator and get the true dynamics y_i.
The choice of good generative schemes is a non-trivial problem and requires a separate investigation. For simplicity, we applied Gaussian randomization for the static variables and initial states. We use correlated noise in order to vary large-scale field properties rather than to simulate even larger uncertainties about the values in particular grid cells. For the bottomhole pressure, a hand-crafted scheme was used in which {ξ_k}, k = 0, …, 5, are random numbers from a uniform distribution; ξ_5 is the only component that is resampled at each time step. The proposed generative scheme gives us realistic initial and boundary conditions as well as a bottomhole pressure schedule (see Fig. 3).
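One way to realize the correlated zero-mean Gaussian perturbation described above is to smooth white noise with a Gaussian filter; the correlation length and amplitude in the sketch below are assumptions, since the paper does not specify these hyperparameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def randomize_field(field, corr_sigma=3.0, amplitude=0.05, seed=None):
    """Add correlated zero-mean Gaussian noise to a 3D property cube."""
    rng = np.random.default_rng(seed)
    # Smoothing white noise with a Gaussian kernel introduces spatial correlation.
    noise = gaussian_filter(rng.standard_normal(field.shape), sigma=corr_sigma)
    # Rescale after smoothing so the perturbation keeps the intended amplitude.
    noise *= amplitude * field.std() / (noise.std() + 1e-12)
    return field + noise

porosity = np.random.default_rng(1).uniform(0.05, 0.3, size=(20, 20, 10))
sample = randomize_field(porosity, seed=0)  # one randomized training realization
```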
Rates module
The inflow from a cell connected with a perforated well segment is calculated on the basis of the standard relation between inflow and pressure drop:
q_{p,i}(t) = T_i M_p (p_i − p_w). (6)
Here q_{p,i}(t) is the volumetric inflow of phase p from the i-th cell to the well at time t, T_i denotes the connection productivity index, M_p is the total mobility of phase p, p_i is the pressure in the i-th cell, and p_w is the pressure on the well-cell interface. In the case of a well parallel to one of the coordinate axes, the connection productivity indices are calculated explicitly. First we find the effective value of the product of formation permeability and formation thickness kh:
(kh)_i = h_i √(k_1 k_2), (7)
where k_1, k_2 are the permeabilities in the directions perpendicular to the well and h_i is the length of the perforated well segment in the i-th cell. Then for the connection index we have
T_i = 2π (kh)_i / ln(r_0 / r_w), (8)
where r_0 is the equivalent radius (Peaceman radius, Peaceman (1978)) and r_w is the well radius. The equivalent radius r_0 is calculated as
r_0 = 0.28 (√(k_2/k_1) d_1² + √(k_1/k_2) d_2²)^{1/2} / ((k_2/k_1)^{1/4} + (k_1/k_2)^{1/4}), (9)
where d_1, d_2 denote the sizes of the i-th block in the directions perpendicular to the well. If a well has an arbitrary trajectory, it is approximated as a piecewise linear one; for each linear segment, projections onto the coordinate axes are calculated, and connection productivity indices are calculated separately for each projection. Thus we obtain the projections of the connectivity index onto the coordinate axes: T_x, T_y, T_z. To obtain the resulting connectivity index for the cell, we combine these indices:
T_i = √(T_x² + T_y² + T_z²). (10)
The described scheme of rates calculation, as well as the dynamics module, is implemented using the PyTorch framework (Paszke, Gross, Massa, Lerer, Bradbury, Chanan, Killeen, Lin, Gimelshein, Antiga, Desmaison, Kopf, Yang, DeVito, Raison, Tejani, Chilamkurthy, Steiner, Fang, Bai and Chintala, 2019). This enables automatic gradient propagation through the rate calculations and makes the model suitable for optimization problems such as history matching.
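As a sanity check of these relations, here is a direct NumPy transcription of Eqs. (7)-(9) for a segment parallel to one axis; note that the formula forms are reconstructed above from the surrounding definitions, and the example values and units are illustrative only:

```python
import numpy as np

def peaceman_radius(k1, k2, d1, d2):
    """Equivalent (Peaceman) radius r0 of an anisotropic grid block, Eq. (9)."""
    num = np.sqrt(np.sqrt(k2 / k1) * d1**2 + np.sqrt(k1 / k2) * d2**2)
    return 0.28 * num / ((k2 / k1) ** 0.25 + (k1 / k2) ** 0.25)

def connection_index(k1, k2, h, d1, d2, rw):
    """Connection productivity index of a segment parallel to one axis, Eqs. (7)-(8)."""
    kh = h * np.sqrt(k1 * k2)            # effective permeability-thickness, Eq. (7)
    r0 = peaceman_radius(k1, k2, d1, d2)
    return 2.0 * np.pi * kh / np.log(r0 / rw)

# 50 m x 50 m block, 10 m perforated length, 0.1 m wellbore radius, mD-scale permeability.
T = connection_index(k1=100e-15, k2=50e-15, h=10.0, d1=50.0, d2=50.0, rw=0.1)
```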
Adaptation scheme
With the reservoir simulation model implemented as an end-to-end differentiable neural network, any set of model parameters can be considered as a space for adaptation. Moreover, the standard adaptation goal (minimization of a difference between simulated and observed data) can be naturally extended with regularization terms that, e.g., penalize material-balance violations or correction amplitudes. A detailed investigation of various sets of parameters in combination with regularization terms should be a matter of separate research; in this paper we present proof-of-concept results and discuss further research options. Following a common HM approach, we consider adaptation in the space of rock parameters (porosity and permeability) and extend it with the auxiliary space of connection productivity indices. Gradient backpropagation through the neural network model allows sensitivity estimation for each individual grid cell block. To avoid undesired overfitting, we require that changes in a cell block be correlated with neighboring blocks, and we penalize large amplitudes using L2 regularization. To some extent, this regularization hinders the capability of the neural network to model various faults. However, providing the model with an additional 3D tensor describing the distribution of faults should help to take this information into account. We attribute this investigation to future research. Technically, we split the initial grid into small cubes of four cell blocks in each direction (of course, one can vary the cube sizes to perform adaptation at different spatial scales). Each cube corresponds to a single additive rock correction factor, initialized with small-amplitude zero-mean random noise (we found this initialization works better than constant zero initialization). These correction factors are adjusted during HM and propagated back to the cell blocks of the initial grid through bilinear upsampling. To include the connectivity indices in the space of adaptation parameters, we multiply (6) by additional connectivity correction factors. In order to ensure that the connectivity correction factors remain non-negative during adaptation, we vary their logarithms instead of the connectivity correction factors themselves. The logarithms are again initialized with small-amplitude zero-mean random noise. The adaptation process works as follows. We iteratively pass through the adaptation time interval with time steps of a fixed size. At each step, we calculate the predicted production rates and a loss function that penalizes the difference between predicted and target values. Based on the loss function value, we compute gradients with respect to the rock and connectivity correction factors and accumulate the gradients. When the time steps reach the end of the adaptation interval, the correction factors are updated according to the accumulated gradients and the Adam optimization scheme (Kingma and Ba, 2014). Then the gradients are set to zero, and the next iteration begins. The total loss function at each iteration is defined as the aggregated loss over all steps. Iterations stop when the total loss stops decreasing substantially. Running the model several times and varying the parameters of the Adam optimization algorithm, we found that increasing the learning rate to 0.3 provides better and faster convergence. Also, the weight decay parameter is set to 5 × 10⁻⁴, which penalizes large amplitudes of the correction factors.
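A schematic PyTorch version of this loop is sketched below, with a stub in place of the differentiable forward model and hypothetical tensor shapes; the paper's "bilinear" upsampling is rendered here as PyTorch's trilinear mode (its 3D analog), and the Adam settings are the ones quoted above:

```python
import torch
import torch.nn.functional as F

def simulate(rock_corr_fine, conn_factors):
    # Stand-in for the differentiable NDE-b-ROM forward pass plus the rates module.
    return rock_corr_fine.mean() + conn_factors.mean()

target_rates = torch.tensor(0.5)  # hypothetical observed data

# Coarse-grid additive rock corrections (one per 4x4x4 cube) and log-connectivity factors.
rock_corr = (1e-3 * torch.randn(1, 1, 16, 16, 16)).requires_grad_()
log_conn = (1e-3 * torch.randn(64)).requires_grad_()

opt = torch.optim.Adam([rock_corr, log_conn], lr=0.3, weight_decay=5e-4)

for iteration in range(150):
    opt.zero_grad()
    # Propagate coarse corrections back to the simulation grid.
    fine = F.interpolate(rock_corr, scale_factor=4, mode="trilinear", align_corners=False)
    rates = simulate(fine, log_conn.exp())  # exp keeps connectivity factors non-negative
    loss = ((rates - target_rates) ** 2).mean()
    loss.backward()
    opt.step()
```

The stub collapses the whole forward model into one line; in the real setting it would be the latent rollout followed by the rate calculations, through which PyTorch backpropagates automatically.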
Reservoir model
For the numerical experiments we used a synthetic model of one of the Western Siberia oilfields. The model grid is represented in corner-point geometry with 145 × 121 × 210 cells, about 1.3M of which are active. In Fig. 5, the structure of the model as well as the porosity distribution are presented. Fig. 6 shows the oil and gas saturation distributions. The oilfield has an anticlinal shape with a gas cap at the top and an oil fringe under the gas cap. The complex structure of the model, with both a gas cap and underlying water, makes it very sensitive to modelling accuracy. The simulator has to be capable of properly simulating water and gas coning effects. The hydrocarbons are recovered with the use of 64 production wells located in both the oil and gas areas. Due to the early stage of reservoir recovery, only a small fraction of the wells has a more or less complete production history. We carefully selected a time interval and a set of wells involved in this research to eliminate low-quality records and, in particular, to eliminate wells with only fragmentary history available. The reason for this preprocessing is as follows. It is clear that data quality is essential for the accuracy of adaptation results. However, it is rather difficult to separate the impact of data quality from the model capability itself. The aim of this paper is to demonstrate the generic approach, while application to different reservoir models with specifically complicated history data might require customization of the data preprocessing steps. The latter discussion is out of the scope of this paper. Finally, we use a set of 12 wells and a time interval of 1.5 years. Each well has daily recorded historical oil, water, and gas production rates, as well as bottomhole pressure. The recorded bottomhole pressure is used as a control parameter in reservoir simulation. An example of the recorded history for one of the wells is presented in Fig. 7.
Results
In this section, we provide the results of an experiment in which rock parameters are varied on a grid downsampled by a factor of 4 with respect to the original grid sizes. Note that the downsampling is applied not due to resource limitations but as a natural regularization for HM. The loss function for HM is defined as the mean squared error (MSE) between predicted and historical (target) rates for each well, aggregated over all wells and fluid phases (gas, water, oil). To normalize the substantially different scales of the production rates of the various fluid phases, we apply logarithmic calibration before computing the MSE. Also, we apply a linear time-weighting function that increases the impact of an error as time progresses. The intuition behind this weighting is that errors in recent rates are more important than more time-remote errors.
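Written out, this loss can be sketched as follows; the log offset eps and the exact weight normalization are assumptions, as the paper does not spell them out:

```python
import torch

def hm_loss(pred, target, eps=1.0):
    """Log-calibrated MSE with linearly increasing time weights."""
    # Log calibration evens out the very different scales of oil, water, and gas rates;
    # eps is an assumed offset that keeps the logarithm finite at zero rates.
    diff = torch.log(pred + eps) - torch.log(target + eps)
    t = pred.shape[-1]
    weights = torch.linspace(1.0 / t, 1.0, t)  # recent errors weigh more
    return (weights * diff**2).mean()

pred = torch.rand(12, 3, 540)    # 12 wells, 3 phases, ~1.5 years of daily rates
target = torch.rand(12, 3, 540)
loss = hm_loss(pred, target)
```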
Fig. 8 shows the decrease of the total loss function against iterations. After 150 iterations the loss stops decreasing and begins to fluctuate near a constant value. We stop the adaptation process at this moment. Fig. 9 and Fig. 10 show a sample horizontal slice of the cubes of normalized porosity and x-permeability. Note that due to the weight regularization the adaptation process affects only a small region around the production wells. In contrast, a model without regularization makes changes even in areas remote from the production wells, which is less physically sound (see Fig. 11 for comparison). The next Fig. 15 shows how much the adaptation parameters are changed by the HM. On average, the rock variables obtain a small negative bias of −0.01 (statistically significant). However, we find that the initial phase content (i.e. porosity multiplied by phase saturation and cell volume) changes by less than 1%. Quite interestingly, we find that almost all connectivity correction factors are distributed near 0 and 1 (being unit-initialized). Since the connectivity correction factor is multiplicative, a value of 1 means no correction is applied, while 0 corresponds to an effectively closed well perforation. Note that this result requires a separate detailed investigation in order to avoid physically irrelevant situations, e.g. when several well blocks are substituted with a single block of increased connectivity index. We admit that additional regularization terms might be proposed to control the distribution of the connectivity correction factors. Fig. 13 shows a comparison of target and simulated cumulative production rates. Note that the time interval is split into two parts. The first one shows a comparison within the adaptation period. The second one demonstrates a prediction against historical values. We observe that the model is able to reproduce the historical values given in the adaptation period and can be used for forecasting on an interval that is at least half of the adaptation period length. The same plot for a sample well is shown in Fig. 14. We observe that the predicted values partially go off-track, e.g. for water. We attribute this issue to the current limitations of the neural network model, which supports only a limited scope of the features and events given in the reservoir history data. A more detailed technical description is available in the documentation that accompanies the open-sourced code. Fig. 15 shows a correlation diagram between predicted and target cumulative production rates over all wells. We observe that for each phase (water, gas, and oil) the correlation coefficient (R value) is 0.94 or above. This indicates that the adaptation process successfully matches the production rates of individual wells. In Fig. 16 we provide the gas/oil ratio computed from the daily simulated and historical gas and oil production rates. Since the gas/oil ratio is a critical parameter in reservoir recovery management, we find that the predicted values are in partial agreement with the actual historical values. To demonstrate the role of the rock and connectivity correction factors, we exclude from the simulation either the rock or the connectivity correction factors and compare the result with the target values and with a simulation where both factors are included. One can note in Fig. 17 that each set of adaptation variables is meaningful, and their combination gives substantially better results. Finally, we demonstrate that adaptation only in the space of rock correction factors (without connectivity factors) limits the model quality. Indeed, while the cumulative production rates over all wells shown in Fig. 18 are comparable with the previous Fig. 13, the correlation between individual wells becomes substantially worse (compare Fig. 19 and Fig. 15). We conclude that adaptation in the joint space of rock and connectivity factors allows better matching for individual wells. One can also conclude from this last example that total production rates cannot serve as the single benchmark for evaluating different adaptation models; additional metrics should be considered as well.
Conclusions
We presented an end-to-end neural network approach that allows reservoir simulation and history matching with standard gradient-based optimization algorithms. The neural network model takes the initial geological parameters of the 3D reservoir model as input. As output, the model returns the wells' production rates. By construction, neural network models allow gradient propagation to any internal and input variables. Using a dataset of development scenarios, we train the neural network to simulate general geological relations; in this case, the internal network variables are optimized. Using historical records of real production rates and bottomhole pressure, we solve the history matching problem; in this case, the initial rock parameters and connectivity indices are optimized. As we demonstrated, the final model allows reliable simulation of historical production rates and forecasting of reservoir dynamics. It should be noted that the suggested neural network approach is not intended to replace standard industrial reservoir simulation software. The goal is to obtain a substantially faster simulation tool, probably at the cost of an acceptable decrease in accuracy. In this research, we consider a reservoir model with about 3.7M grid cells in total, about 1.3M of which are active. Simulation of daily production rates for a 1.5-year time interval takes about 1 minute (using a modern GPU workstation). Note that the computation also includes a complete simulation of the pressure and phase saturation cubes. This result is several orders of magnitude faster in comparison to current industrial reservoir simulation software. The adaptation process for a one-year period takes about 3 hours.
The neural network approach opens a broad and convenient way to implement many reservoir simulation and adaptation strategies. The point is that one can easily combine the variables to be optimized during HM. For example, in this research we consider joint adaptation in the space of rock parameters and connectivity indices. Also, the HM and forward simulation problems can be naturally extended with additional regularization terms, including control over the proper conservation of physical parameters such as the mass of components. Investigation and comparison of the various experimental settings is a matter of future research, which looks optimistic taking into account the proof-of-concept results demonstrated in this paper. This research is completely based on the source code available in the GitHub repository https://github.com/Skoltech-CHR/DeepField.
2021-02-23T02:15:35.807Z
2021-02-20T00:00:00.000
{ "year": 2021, "sha1": "c8f01b35f48f89c073135a3d9f2516d4475def66", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2102.10304", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c8f01b35f48f89c073135a3d9f2516d4475def66", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
3796666
pes2o/s2orc
v3-fos-license
Spleen-Preserving Surgery in Splenic Artery Aneurysm
Endovascular interventions are increasingly used in the treatment of a splenic artery aneurysm (SAA), which is a rare and life-threatening clinical disorder. However, in cases of SAA rupture, minimally invasive interventions are unsuitable, and open surgery remains the gold standard method. In open surgery, care should be taken to preserve the spleen and its immune function in cases where an arterial segment of sufficient length allows for reconstruction. An SAA was detected in a 51-year-old woman who presented to our polyclinic with left upper quadrant pain. An endovascular intervention was unsuccessful, and open surgery was performed. Approximately 5 cm of aneurysm in the middle segment of the splenic artery was treated by arterial anastomosis, and the spleen was preserved. The patient experienced no postoperative complications and remained asymptomatic at the seventh month of follow-up. The aim of this case report is to emphasize the importance of spleen-sparing surgery in cases of SAAs.
Introduction
A splenic artery aneurysm (SAA) is a rare condition in which the diameter of the splenic artery dilates to over 1 cm. SAA rupture is associated with relatively high mortality [1]. The risk of rupture is increased in cases of pregnancy, pseudoaneurysms, aneurysms with a diameter greater than 2 cm, portal hypertension, symptomatic SAAs, and liver transplantations. In the presence of the aforementioned factors, treatment should take place without delay [1,2]. There is insufficient evidence on the best treatment for SAAs due to the retrospective nature of literature studies and the low number of cases [2]. With advances in medicine, endovascular treatment (coil embolization or stenting) and laparoscopic surgery are increasingly used to treat SAAs. However, open surgery remains the gold standard and most frequently applied treatment [3]. In open surgery, the aneurysm is frequently resected with splenectomy. Due to the important immune function of the spleen, spleen-preserving surgery is recommended whenever possible [4]. In this study, arterial reconstruction was performed in a symptomatic SAA patient to preserve the spleen.
Case Report
A 52-year-old female presented to our polyclinic with increasing pain in the left upper quadrant over the previous month. She had been treated for hypertension and had undergone a hysterectomy 7 years earlier due to a myoma. The patient had quit smoking 3 years earlier and was taking alprazolam 0.5 mg/day for an anxiety disorder. A physical examination revealed minimal sensitivity of the left upper quadrant upon deep palpation and an incision scar from the gynaecologic surgery. Nothing else of note was detected. The patient's body mass index was 32.5 kg/m², and laboratory measurements were within the normal range. Endoscopy and a colonoscopy conducted to determine the cause of the pain revealed no pathological findings. An ultrasonic evaluation revealed no evidence of an aneurysm. However, magnetic resonance imaging (MRI) was conducted due to the presence of a hyperechoic lesion (53 × 39 mm in size) in the right lobe of the subcapsular area of the liver. The MRI study revealed a 35 mm diameter hemangioma in liver segment 8 and a 40 mm aneurysm with a thrombus in the splenic artery of the pancreas tail region (Figure 1). Tortuosity of the splenic artery, which had a proximal diameter of 1 cm, was detected.
In angiographic multislice computed tomography (CT), minimal aneurysmatic dilatation was detected in the celiac truncus and in the midsegment of the splenic artery, in addition to a saccular aneurysmal dilatation, 50% thrombosed and 45 × 40 × 39 mm in size, adjacent to the pancreas. A long segment of the splenic artery extended from the distal aneurysm to the splenic hilus (Figure 2). Following a consultation with the interventional cardiovascular department, an endovascular intervention was planned for the treatment of the aneurysm. Endovascular interventions attempted from the femoral region were unsuccessful because of extreme angulation of the truncus celiacus, aneurysmal dilatation of the trunk, and tortuosity of the splenic artery. The patient was informed about the surgical procedure, and written consent was obtained. H. influenzae and pneumococcal vaccinations were administered. The gastrocolic ligament was opened by median laparotomy, and the aneurysm was revealed. In the middle segment of the splenic artery, approximately 5 cm of aneurysm, which was adherent to the upper pancreas and retroperitoneal area, was opened after controlling the proximal and distal flow of the splenic artery. Collateral flow was not monitored. The proximal and distal ends of the splenic artery were anastomosed to provide flow continuity (Figure 3). The posterior and inferior wall of the aneurysm could not be resected because of excessive adhesions. The duration of the surgery was 185 min, and intraoperative blood loss was 200 ml. No complications were observed in the postoperative follow-up period, and the patient was discharged without any problems on the fourth day. The patient was asymptomatic at the sixth month of postoperative follow-up. On angiographic CT, splenic artery flow was normal, and the residual aneurysm had regressed (Figure 4).

Discussion

Visceral artery aneurysms are rare, and most are SAAs. They are most commonly seen in patients in their 50s and 60s and show a female predominance [1, 5]. The aneurysm is thought to form as a result of degeneration of the arterial wall and deterioration of its elastic structure. Atherosclerosis, autoimmune pathologies, hypertension, smoking, pancreatitis, portal hypertension, trauma, and collagen tissue diseases have been associated with the formation of aneurysms [1, 4, 6]. Eighty percent of SAA cases are asymptomatic and incidentally detected in radiological evaluations performed for different clinical conditions. Pain in the epigastric and left upper quadrant is the most common symptom [3]. Seventy-five percent of ruptured aneurysms occur in pregnancies, and the mortality rate in cases of ruptured aneurysms is relatively high. The risk of rupture is higher in cases of pregnancy, pseudoaneurysms, aneurysms with a diameter greater than 2 cm, portal hypertension, and liver transplantations [1-4]. Although open surgery is accepted as the gold standard for SAAs, debate continues among clinicians about the ideal treatment, as perioperative morbidity and mortality rates are higher with open surgery [7]. Recently, transcatheter embolization and endovascular stents have been increasingly used to treat SAAs. Given their high success rates and minimal invasiveness, these appear to be valid alternatives to open surgery. Laparoscopic surgery is also widely used; however, it requires advanced skills and experience in vascular reconstruction [1, 6, 7]. In a meta-analysis, Hogendoorn et al.
[2] found no significant difference between open surgery and endovascular repair in terms of major complication rates. Moreover, in this meta-analysis, the short-term outcome of endovascular repair was better than that of open surgery, but open surgery was superior in terms of long-term outcomes [2]. Excessive angulation of the celiac axis and tortuosity of the splenic artery may result in failure of endovascular procedures. Open surgery should be the preferred choice for ruptured aneurysms. A splenectomy is frequently performed in conventional surgery, and aneurysm resection is then conducted if there is no excess adhesiveness. In cases of aneurysms that are sufficiently far away from the splenic hilum, the preservation of the spleen and its immune function should be a priority [1]. In open surgery using an anterior approach, division of the short gastric artery and left gastroepiploic artery attachments increases the risk of splenic infarction. In such cases, a lateral retroperitoneal approach is recommended for the protection of collaterals [4, 8]. For giant SAAs, or cases where a simple aneurysmectomy is impossible due to dense strictures, the preferred treatment options include the following: an aneurysmectomy plus splenectomy; bipolar splenic artery ligation, with or without an aneurysmectomy; transaneurysmal splenic artery ligation; or distal pancreatectomy, if necessary [9]. In our case, an endovascular intervention was first planned, but open surgery was subsequently performed due to the failure of cannulation of the celiac truncus, tortuosity of the splenic artery, and an accompanying celiac aneurysm. The presence of an arterial segment of suitable length allows for reconstruction and spleen-sparing surgery. In the present case, an anterior approach was preferred because the need for revascularization was anticipated.

Rupture of an SAA is a rare and life-threatening event. In symptomatic young women planning future pregnancies, and in cases of aneurysms larger than 2 cm, treatment should take place without delay. Minimally invasive procedures should be the first choice in the treatment of SAAs. In appropriate cases, preservation of the spleen and splenic artery reconstruction are possible in open surgery.
Exploring Hybrid Linguistic Features for Turkish Text Readability

This paper presents the first comprehensive study on automatic readability assessment of Turkish texts. We combine state-of-the-art neural network models with linguistic features at lexical, morphosyntactic, syntactic and discourse levels to develop an advanced readability tool. We evaluate the effectiveness of traditional readability formulas compared to modern automated methods and identify key linguistic features that determine the readability of Turkish texts.

Introduction

Automatic Readability Assessment (ARA) is an important task in computational linguistics that aims to automatically determine the level of difficulty of understanding a written text, which has implications for various fields, such as healthcare, education, and accessibility (Vajjala, 2021). In the healthcare sector, medical practitioners can use ARA tools to ensure patient information and consent forms are easily understandable (Ley and Florio, 1996). In the field of education, teachers and learners alike can benefit from ARA systems to adapt materials to the appropriate language proficiency level (Kintsch and Vipond, 2014). The appropriate readability of technical reports and other business documents is critical to ensure that the intended audience can fully understand the content and can make informed decisions (Bushee et al., 2018). In areas such as cyber-security, readability is particularly important as it can impact response time to risk closures and case materials (Smit et al., 2021).

The task of assessing readability presents challenges, particularly when dealing with large corpora of text. Manual extraction and calculation of linguistic features are time-consuming, expensive, and prone to human errors, leading to subjective labels (Deutsch et al., 2020). Recent research in the field has focused on developing automated methods for extracting linguistic predictors and training models for readability assessment.

Despite these crucial applications and developments, readability efforts in Turkish have largely been confined to traditional readability formulas, such as Flesch-Kincaid (Kincaid et al., 1975) and its adaptations (Ateşman, 1997; Bezirci and Yilmaz, 2010; Çetinkaya, 2010). Several previous studies have pointed out the shortcomings of these formulas (Feng et al., 2009, 2010). They typically rely on superficial text features such as sentence length and word length. The integration of complex morphological, syntactic, semantic, and discourse features in modern ARA approaches offers the possibility of significantly improving the current readability studies in Turkish. In this paper, we present the first ARA study for Turkish. Our study combines traditional raw text features with lexical, morpho-syntactic, and syntactic information to create an advanced readability assessment tool for Turkish. We demonstrate the effectiveness of our tool on a new corpus of Turkish popular science magazine articles, published for different age groups and educational levels. Our study aims to contribute to the development of automated tools for accessibility, educational research, and language learning in Turkish.
The rest of the paper is organized as follows. In Section 2, we review related work on readability assessment and machine learning-based approaches. In Section 3, we describe our corpus and the linguistic features used in our study. In Section 4, we present the results of our experiments and analyze the effectiveness of our tool. Finally, in Section 5, we conclude our research and discuss future directions.

Previous Work

The research of quantifying text readability, or the ease with which a text can be read, has a history spanning over a century (DuBay, 2007). Initial research was centered on the creation of lists of difficult words and readability formulas such as Flesch Reading Ease (Flesch, 1948), the Dale-Chall readability formula (Dale and Chall, 1948), the Gunning FOG Index (Gunning, 1969) and SMOG (Mc Laughlin, 1969). These formulas are essentially simple weighted linear functions that utilize easily measurable variables such as word and sentence length, as well as the proportion of complex words within a text. Initially developed for the English language, the Flesch Reading Ease formula required recalibration for its application to Turkish, a task undertaken by Ateşman (1997). However, a significant obstacle in its adoption was Ateşman's failure to disclose the statistical variables used in the recalibration process. This gap was later addressed in the work of Çetinkaya (2010), which also assigned appropriate grade levels, thus facilitating its practical use in the Turkish educational context. Not long after the adaptation, Bezirci and Yilmaz (2010) introduced an important refinement, akin to the approach taken in the SMOG formula. They posited that the impact of polysyllabic words on text complexity is distinct from the total number of syllables present in the text. Accordingly, they included the counts of polysyllabic words (those with 3-, 4-, and 5+ syllables). Sönmez (2003) encountered inconsistencies when applying the Gunning FOG Index to Turkish texts, which led to the development of their adaptation. The limitations are mainly due to the subjective nature of the formula in identifying complex words and concepts, which contrasts with other formulas that use easier-to-identify criteria such as syllable counts.
Readability assessment has found practical applications in several areas in Turkish, particularly in the fields of medicine and education. For instance, researchers have used the Flesch-Kincaid and Ateşman readability formulae to assess the readability of anaesthesia consent forms in Turkish hospitals, which led to valuable insights into how these documents could be optimised for better comprehension (Boztas et al., 2017; Boztaş et al., 2014). In the realm of education, readability studies have been employed to evaluate the complexity of textbooks, thereby ensuring that these crucial learning materials are appropriate for the targeted student age group. For example, research has been conducted to determine the readability levels of Turkish tales in middle-school textbooks, providing insights that could potentially enhance the quality of education by aligning learning materials with students' comprehension abilities (Turkben, 2019; Tekşan et al., 2020; Guven, 2014). While traditional readability formulas have significantly contributed to the field of readability assessment, they are not without their limitations. They often rely heavily on surface-level text features, such as word and sentence length, and fail to account for deeper linguistic and cognitive factors that influence readability (Collins-Thompson, 2014).

Readability formulae have inherent limitations that can affect their accuracy and applicability. Given the unique phonetic attributes, sentence formation patterns, and mean syllable length in each language, each language requires its own calibrated readability formula. The validity of studies employing readability formulae calibrated for the English language to evaluate texts in other languages remains questionable. In practice, applying an English-calibrated formula to Turkish texts may result in an overestimation of readability levels. Indeed, most studies that have used this approach have reported inflated readability requirements (Akgül, 2019, 2022). Furthermore, the evolution of language over time may necessitate periodic recalibration of these formulas (Lee and Lee, 2023). As language trends evolve and new words and phrases become commonplace, readability formulas must adapt to remain accurate and relevant. Research also indicates that traditional readability measures display unreliable performance when applied to non-traditional document types, such as web pages (Petersen and Ostendorf, 2009).

Traditional readability formulas, despite their extensive use, have been criticised for their lack of wide linguistic coverage (Feng et al., 2009, 2010). These formulas predominantly focus on superficial text features, largely ignoring other linguistic aspects that significantly contribute to text readability. Factors such as syntactic and semantic complexity, discourse structure, and other linguistic branches recognised by Collins-Thompson (2014), which are integral to comprehending a text, remain largely unaccounted for in these traditional models. This narrow linguistic focus can lead to inaccuracies in readability assessment, especially when applied to languages or texts with diverse linguistic structures. These scores are relative measures of readability that should be interpreted in the context of the text's overall features and the target audience's reading ability. They are not absolute measures, and treating them as such can result in a misunderstanding of the text's actual readability.
Practitioner errors in applying readability formulas often stem from methodological shortcomings and misinterpretations (Wang et al., 2013). The requirement of traditional measures for considerable text sample sizes introduces another impediment, even though the theoretical minimum size for a text sample has yet to be conclusively established. A common methodological error is the inappropriate sampling of text. Some studies might only consider a limited section of a text, such as the first 100 words, leading to skewed results, especially in scientific texts where complexity often increases later in the document. Similarly, the selective assessment of text sections that do not accurately mirror the overall complexity of the text, like focusing solely on the introduction or conclusion, can misrepresent the readability level.

In recent years, research in ARA has shifted from traditional linear models, which use simple metrics such as word and sentence length to estimate the reading level of a text, to fine-grained features (Collins-Thompson, 2014). These features often include machine learning models trained on a combination of word counts, lexical patterns, discourse analysis, morphology, and syntactic structures. There has been an emerging trend toward using neural models for ARA. These models have demonstrated the capacity to implicitly capture the previously mentioned features without the need for manual feature extraction (Jawahar et al., 2019). Martinc et al. (2021) and Imperial (2021) experimented with contextual embeddings of BERT (Devlin et al., 2018) for the readability assessment task, achieving par or better results than feature-based approaches. However, both studies omitted cross-domain evaluation, leading to uncertainty about the extent to which language models rely on topic and genre information, as opposed to readability. Other studies have further explored various strategies to integrate linguistic features with transformer models, promoting a fusion of traditional and neural approaches (Lee et al., 2021; Deutsch et al., 2020). The state-of-the-art results are currently being achieved by hybrid models that ensemble linguistic features with transformer-based models, highlighting the combined strength of traditional and modern approaches.

Corpus

The most widely used readability corpora include One Stop English (OSE) (Vajjala and Lučić, 2018), the WeeBit corpus (Vajjala and Meurers, 2012) and the Newsela corpus (Xu et al., 2015). While the majority of these benchmark datasets and corpora are predominantly available in English, there is a growing interest in the development of readability corpora in other languages. In the context of low-resource languages, limited access to digital text resources necessitates reliance on conventional learning materials, such as classroom materials and textbooks. There are currently no existing readability corpora available for Turkish.
TUBITAK PopSci Magazine Readability Corpus

Our corpus was constructed using popular science articles from TUBITAK Popular Science Magazines spanning the period 2007 to 2022. The articles are openly published and made available for non-commercial redistribution and research purposes. We selected 2250 articles from three magazines, each catering to readers of a different age group: Meraklı Minik (for ages 0-6), Bilim Çocuk (for ages 7+), and Bilim ve Teknik (for ages 15+). Accordingly, we consider the articles from these magazines as elementary, intermediate, and advanced level reading material. Our corpus is non-parallel and encompasses a diverse range of topics, including instructions for laboratory experiments and brief articles about recent scientific discoveries. This characteristic is similar to that of the WeeBit corpus (Vajjala and Meurers, 2012), which also includes articles from various topics and resources. Given that the articles in our corpus are written by experts and specifically tailored for distinct age groups, it can be appropriately regarded as an 'expert-annotated' corpus. We used an off-the-shelf pdf-to-text converter to extract the relevant article text and manually corrected the articles to ensure the conversion accuracy of Turkish characters and the layout integrity. We also performed a preliminary analysis on the three reading levels of the corpus using traditional formulae; the results in Table 2 present the readability metrics Atesman, Cetinkaya-Uzun and Type-Token Ratio (TTR). The Atesman and Cetinkaya readability scores decrease from one level to the next, indicating that texts become more complex at higher reading levels. In contrast, the TTR score increases, suggesting that texts become more diverse and less repetitive at higher reading levels. It should also be noted that the readability levels of the elementary-level articles under both formulas were not suitable for the intended age group, and that the magazine's disclaimer states that certain articles may require the assistance of an adult or parent.

Linguistic Features

In this study, we explore five subgroups of linguistic features from our Turkish readability corpus: traditional or surface-based features, syntactic features, lexico-semantic features, morphological features, and discourse features. We employ spaCy v3.4.0 (Honnibal et al., 2020) with the pre-trained tr_core_news_trf model (https://huggingface.co/turkish-nlp-suite/tr_core_news_trf) for the majority of general tasks, including entity recognition, POS tagging, and dependency parsing. We use the Stanford Stanza parser version 1.5.0 (Qi et al., 2020) for constituency parsing.

Traditional Features (TRAD)

Traditional or surface-based features are commonly used to predict the readability of Turkish texts, and we also adopt them as a baseline for our study. Specifically, we extract 7 traditional features, including Turkish adaptations of well-known readability formulas such as Atesman and Cetinkaya-Uzun, as well as average values of words and syllables. As noted by Bezirci and Yilmaz (2010) in their evaluation of the Turkish readability formulae, the impact of the number of polysyllabic words on text complexity is different from that of the total number of syllables present in the text. Therefore, we also included the counts of polysyllabic words (3-, 4-, and 5+ syllables) as separate features in our analysis.
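As an illustration of how the TRAD group can be computed, the minimal sketch below derives the Ateşman score and the polysyllable counts from raw text. It is not the study's actual extraction code: the Ateşman coefficients are the commonly cited ones from Ateşman (1997), and counting vowels as syllables is a simplifying heuristic of this sketch (each Turkish syllable contains exactly one vowel).

```python
# Minimal sketch of the TRAD feature group; Atesman coefficients are the
# commonly cited ones, and the vowel-counting syllable heuristic is an
# assumption of this sketch.
import re

VOWELS = set("aeıioöuüAEIİOÖUÜ")

def syllable_count(word):
    return max(sum(ch in VOWELS for ch in word), 1)

def trad_features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text)
    syllables = [syllable_count(w) for w in words]
    avg_syllables = sum(syllables) / len(words)
    avg_sentence_len = len(words) / max(len(sentences), 1)
    return {
        # Atesman (1997) readability: higher scores mean easier texts
        "atesman": 198.825 - 40.175 * avg_syllables - 2.610 * avg_sentence_len,
        "avg_word_syllables": avg_syllables,
        "avg_sentence_length": avg_sentence_len,
        # polysyllable counts, following Bezirci and Yilmaz (2010)
        "poly_3": sum(s == 3 for s in syllables),
        "poly_4": sum(s == 4 for s in syllables),
        "poly_5plus": sum(s >= 5 for s in syllables),
    }

print(trad_features("Bilim insanları yeni bir gezegen keşfetti. Gezegen çok uzaktadır."))
```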
Syntactic Features (SYNX)

Syntactic properties have a significant impact on the overall complexity of a given text, which serves as an important indicator of readability. We extract an array of syntactic features that capture various dimensions of sentence structure.

Phrasal and dependency type features: Reading abilities are related to ratios involving clauses in a text (Lu, 2010). We extract features based on noun and verb phrases at the sentence and article levels. We integrate features based on the unconditional probabilities of their dependency-based equivalents (Dell'Orletta et al., 2011). These encompass various types of syntactic dependencies, including subject, direct object, and modifier, among others.

Parse tree depth features: The depth and structure of dependency trees in a text can reflect the level of sentence complexity. Following this principle, we extract the average and maximum depths of the constituency and dependency tree structures present in the text (Dell'Orletta et al., 2011).

Part-of-Speech features: Part-of-speech (POS) tags provide essential information about the syntactic function of words in sentences. Adapting the work of Tonelli et al. (2012) and Lee et al. (2021), we include features based on universal POS tag counts. Such features offer insights into the distribution and usage of different word categories, adding another layer of syntactic information.

Lexico-Semantic Features (LXSM)

Lexico-semantic features are a set of linguistic attributes that can reveal the complexity of a text's vocabulary. These features can be used to identify specific words or phrases that may pose difficulty or unfamiliarity to readers (Collins-Thompson, 2014).

Lexical Variation features: Second language acquisition research has found a correlation between the diversity of words within the same Part-Of-Speech (POS) category and the lexical richness of a text (Vajjala and Meurers, 2012). We extract noun, verb, adjective, and adverb variations, which represent the proportion of the respective category's words to the total.

Type Token Ratio (TTR) features: TTR is a commonly used metric to quantify lexical richness and has been widely employed in readability assessment studies. We compute five distinct variations of TTR from Vajjala and Meurers (2012). The standard TTR variations of a text sample are susceptible to the text length, which can introduce bias in the readability assessment. To address this limitation, we also consider the Moving-Average Type-Token Ratio (MATTR) (Covington and McFall, 2010). The MATTR mitigates the length-dependency issue by calculating the TTR score within a moving window across the text.

Psycholinguistic features: We adopted word frequencies obtained from the Turkish psycholinguistic database created by Acar et al. (2016). This resource was built from transcriptions of children's speech and corpora of children's literature, thus containing words commonly acquired during early development. It also includes words typically acquired during adulthood from a standard corpus. We extracted the average word and sentence frequency for both early- and late-acquired words. We calculate features based on the average log10 values, similar to the SubtlexUS corpus (Brysbaert and New, 2009).
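To make the length correction behind MATTR concrete, here is a small sketch of the computation described above; the window size is illustrative, since the exact window used in the study is not stated here.

```python
# Sketch of plain TTR versus Moving-Average TTR (Covington and McFall, 2010).
# MATTR averages the TTR of every fixed-size window, removing the bias that
# makes plain TTR shrink as texts get longer. The window size is illustrative.
def ttr(tokens):
    return len(set(tokens)) / len(tokens)

def mattr(tokens, window=100):
    if len(tokens) <= window:  # fall back to plain TTR for short texts
        return ttr(tokens)
    windows = (tokens[i:i + window] for i in range(len(tokens) - window + 1))
    scores = [ttr(w) for w in windows]
    return sum(scores) / len(scores)

tokens = "bilim çocuklar için eğlenceli bir keşif yolculuğudur bilim".split()
print(round(ttr(tokens), 3), round(mattr(tokens, window=4), 3))
```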
Word Familiarity features: Familiarity with specific words can greatly affect readability. Based on prior work on Italian (Dell'Orletta et al., 2011) and French (François and Fairon, 2012) readability studies, we assessed the vocabulary composition of the articles using a reference list of 1700 basic words essential for achieving elementary reading proficiency in Turkish. This list, a combination of the first 1200 words taught to children aged 0-6 (Keklik, 2010) and a set of essential words from an open-access textbook for learning Turkish, provides a benchmark for vocabulary familiarity. We calculated the percentage of unique words (types) in the text based on this reference list, performed on a lemma basis.

Morphological features (MORPH)

Morphological complexity plays a significant role in readability assessment, particularly in languages that are morphologically richer than English, such as German (Hancke et al., 2012) and Basque (Gonzalez-Dios et al., 2014). In our study, we integrate the Morphological Complexity Index (MCI) from Brezina and Pallotti (2019). The MCI captures the variability of morphological exponents of specific parts-of-speech within a text by comparing word forms with their stems. We calculate MCI features for verbs, nouns, and adjectives, considering different sample sizes and sampling techniques with and without repetition. MCI has been leveraged in cross-lingual readability assessment frameworks, proving its applicability across languages with varying morphological structures (Weiss et al., 2021). However, these studies have not explored agglutinative languages such as Turkish and Hungarian.

Discourse features (DISCO)

The final group of features we examine are entity density features. The presence and frequency of entities within a text can significantly impact the cognitive load required for comprehension. Entities often introduce new conceptual information, thereby increasing the burden on the reader's working memory. This relationship between entities and readability was previously shown by Feng et al. (2009, 2010).

Experiments

We experiment with four different setups: trad-baseline (non-neural model with shallow features), modern-baseline (non-neural model with linguistic features), neural (pretrained transformer models), and hybrid (modern-baseline + neural). We use 10-fold cross-validation (10FCV) and evaluate our models using standard metrics such as accuracy, precision, recall, and macro F1-score. Specifically, we choose traditional learning algorithms, namely Logistic Regression, Support Vector Machines, Random Forest and XGBoost, as our baseline models. We perform a randomised search to explore a reasonable range of hyper-parameter values, and then apply a grid search to identify the optimal combination of hyper-parameter values within this range.

Non-Neural Models with Linguistic Features

Given the lack of available baselines for the readability task in Turkish, our first objective is to establish a baseline for the readability task. This baseline (trad-baseline) is designed to be on par with traditional readability formulas and is reliant on shallow linguistic features such as sentence and word lengths. By establishing this baseline, we are effectively creating a benchmark that allows for meaningful comparison with the traditional readability formulas, which are the only available methods in readability assessment for Turkish. We then expand our feature set and include a more diverse set of linguistic feature groups (modern-baseline).
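The evaluation loop for the non-neural baselines can be sketched as follows. The feature matrix here is a random stand-in, since the real one comes from the extraction steps above, but the 10-fold protocol and the metrics mirror the setup described.

```python
# Sketch of the modern-baseline evaluation: 10-fold cross-validation of a
# Random Forest over a feature matrix X (one row per article, one column per
# linguistic feature) and readability labels y. X and y are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))                      # stand-in feature matrix
y = rng.choice(["elementary", "intermediate", "advanced"], size=300)

scores = cross_validate(
    RandomForestClassifier(n_estimators=300, random_state=0),
    X, y, cv=10,
    scoring=["accuracy", "f1_macro", "precision_macro", "recall_macro"],
)
for name, vals in scores.items():
    if name.startswith("test_"):
        print(name, round(vals.mean(), 3))
```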
We are interested in the performance of individual features, but we also aim to identify the best-performing combinations when these features are assembled into linguistic groups.

Neural Models

We extend from the established usage of transformer-based models in readability assessment (Deutsch et al., 2020; Martinc et al., 2021; Lee et al., 2021) and opt for the BERTurk model for our analysis. We tested multiple learning rates and batch sizes to ascertain the optimal configuration for our task. Specifically, we examined the learning rates [1e-5, 2e-5, 3e-5, 1e-4] and the batch sizes [8, 16, 32]. Our final model used the AdamW optimizer, a linear scheduler with 10% warmup steps, a batch size of 8, and a learning rate of 3e-5. The sequence lengths of our input documents were all set to 512 tokens. We fine-tune our model for three epochs.

Hybrid Model

In our study, we experiment with a hybrid model approach that aims to leverage the strengths of both neural and non-neural models in an ensemble learning strategy. The premise behind the hybrid model is based on the observation that while neural models such as BERT have demonstrated robust performance across diverse tasks, they can still benefit from incorporating handcrafted linguistic features, which have been key components in traditional non-neural models (Deutsch et al., 2020).

Results

Four distinct models, namely Support Vector Machines (SVM), Random Forest (RandomF), Logistic Regression (LogR), and XGBoost, were assessed using the combination of five different linguistic groups: traditional (TRAD), lexico-semantic (LXSM), syntactic (SYNX), morphological (MORPH), and discourse (DISCO) features. Table 6 provides a comparative view of these models' performance when trained using the full combination. Among the four models evaluated, the Random Forest model delivered the highest performance with 85.3%. Importantly, all of the linguistic groups used provide orthogonal or distinct information. The varying levels of performance between different approaches is demonstrated in Table 6. The hybrid model, which combines the strengths of both traditional and neural methodologies, outperforms all other models, securing the highest values for accuracy, precision, recall, and F1 score. Following the hybrid model, the neural model performs best. The neural model (BERT) demonstrates an enhanced ability to capture nuanced characteristics of text readability, exhibiting superior performance to the baseline models without any handcrafted linguistic features. The modern baseline, incorporating five different linguistic subgroups, achieves superior performance compared to the traditional baseline. This highlights the advantage of leveraging an extended set of linguistic features over merely relying on surface-level features typical of traditional readability formulae.
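The exact fusion used in the hybrid model is not spelled out above; one common realization in the literature (e.g., Deutsch et al., 2020) concatenates a transformer document embedding with the handcrafted feature vector and trains a classical model on top. The sketch below follows that pattern; the BERTurk checkpoint name and the [CLS] pooling are assumptions of the sketch, not confirmed details of the study.

```python
# One plausible realization of the hybrid model: concatenate a BERTurk [CLS]
# embedding with handcrafted linguistic features and train a classical
# classifier on top. Checkpoint name and pooling choice are assumptions.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.ensemble import RandomForestClassifier

tok = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-cased")
bert = AutoModel.from_pretrained("dbmdz/bert-base-turkish-cased").eval()

def embed(text):
    """Return the [CLS] vector of the (possibly truncated) document."""
    batch = tok(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        out = bert(**batch)
    return out.last_hidden_state[0, 0].numpy()

def hybrid_matrix(texts, linguistic_features):
    """Stack neural and handcrafted features column-wise."""
    neural = np.vstack([embed(t) for t in texts])
    return np.hstack([neural, np.asarray(linguistic_features)])

# clf = RandomForestClassifier().fit(hybrid_matrix(train_texts, train_feats), y_train)
```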
Model Interpretation

In order to gain insights into the significance of individual linguistic features within our best-performing model, the RF model, we utilised two well-established model interpretation techniques specifically designed for Random Forest models: Feature Permutation and Mean Decrease in Impurity (MDI), as shown in Figures 1 and 2.

Feature Correlation

We also considered model-independent analysis through Spearman correlation to gain additional perspective on the importance of features with respect to readability levels. Table 7 presents the ten features with the highest Spearman correlation coefficients, highlighting their significance for readability assessment.

Linguistic Features

The analysis of feature importance consistently highlights the significant role of simple measures such as average sentence length and polysyllable counts. These findings align with previous research, where it has been shown that, even compared to more complex feature extraction methods, a simple measure such as sentence length can indirectly capture multiple linguistic aspects of readability. Furthermore, our analysis demonstrates that lexico-semantic features play a prominent role in determining readability. This is evident from the performance improvement observed when including the LXSM linguistic feature set in the modern-baseline method. It indicates that while traditional features are indeed valuable, incorporating fine-grained information at the semantic and lexical level can lead to an even better understanding of overall readability. The consistent presence of the syntactic feature "mean tree depth" further supports the relationship between sentence length and syntactic complexity. The correlation between mean tree depth and mean sentence length suggests that the structural complexity captured by syntactic features aligns with the overall complexity of sentences.

Conclusion

We introduced a new readability corpus based on popular science magazine articles, providing a valuable resource for future research in Turkish readability assessment. By exploring the effectiveness of linguistic features at different levels, we have demonstrated their superiority over traditional readability formulae and shallow-level features. Our findings emphasise the importance of incorporating fine-grained linguistic features, as they provide more comprehensive insights into the complexity of Turkish texts. We showed the potential of hybrid models that combine fine-grained features with neural models by leveraging the strengths of both linguistic features and state-of-the-art transformers.

Figure 1: Feature importance by permutation on the full model.
Figure 2: Feature importance by MDI on the full model.
Table 1: Descriptive corpus statistics. Table 1 displays descriptive statistics for the finalized corpus. As expected, the advanced texts display a greater average length compared to the elementary texts. However, the high standard deviation values for each level indicate that other factors beyond text length may have a significant impact on determining the reading level of a given text.
Table 3: Example sentences for three reading levels.
Table 5: Incremental contribution of each feature to the RandomF model.
Table 6: Performance comparison of readability approaches.
Table 7: Top ten features ranked by their Spearman correlation coefficients.
Machine Learning Classification Techniques for Detecting the Impact of Human Resources Outcomes on Commercial Banks Performance

The banking industry is a market with great competition and dynamism where organizational performance becomes paramount. Different indicators can be used to measure organizational performance and sustain competitive advantage in a global marketplace. The execution of the performance indicators is usually achieved through human resources, which stand as the core element in sustaining the organization in the highly competitive marketplace. It becomes essential to manage human resources strategically and align their strategies with organizational strategies. We adopted a survey research design using a quantitative approach, distributing a structured questionnaire to 305 respondents utilizing efficient sampling techniques. The prediction of bank performance is crucial, since bad performance can result in serious problems for the bank and society, such as bankruptcy and negative influence on the country's economy. Most researchers in the past adopted traditional statistics to build prediction models; however, due to the efficiency of machine learning algorithms, many researchers now apply various machine learning algorithms to various fields, including performance prediction systems. In this study, eight different machine learning algorithms were employed to build performance models predicting the prospective performance of commercial banks in Nigeria based on human resources outcomes (employee skills, attitude, and behavior), implemented in Python with machine learning libraries and packages. The results of the analysis clearly show that human resources outcomes are crucial in achieving organizational performance, and the models built from the eight machine learning classifier algorithms in this study predict the bank performance as superior, with accuracies of 74-81%. The feature importance was computed with the corresponding Scikit-learn functionality to show the comparative importance or contribution of each feature to the prediction, and employee attitude is rated far above the other features. Nigeria's bank industry should focus more on employee attitude so that performance can be improved from the current superior class to the outstanding class.

Introduction

Today's business environment is highly competitive and changing rapidly in terms of globalization and technology innovations, and it becomes imperative to develop internal potential by paying adequate attention to people management and the workforce that enables the systems to operate. Hence, human resources management has been considered vital in obtaining a sustainable competitive advantage in the face of globalization and advances in technology [1-6]. There are drastic changes in the banking sector due to the application of electronic banking technologies, where financial institutions are now making use of the World Wide Web and other appropriate technologies and software to process applications for various products at minimum time and cost [7, 8]. This has greatly improved bank operations and services across the globe. The traditional data analysis techniques are unable to cope with the large and ever-increasing volumes of data produced in day-to-day operations from internal and external sources, and traditional data analysis does not effectively harness the increased processing power.
The assumption of independence among predictor variables limits their application in the real world, and these methods may be theoretically invalid for finite samples, which can pose a problem when applying them to performance prediction because the multivariate-normality assumption for the independent variables is frequently violated in financial datasets [9-11]. In a financial institution, data are the assets. Therefore, the value of the data can be evaluated when the organization can extract the valuable knowledge hidden in the raw data. Data mining involves the extraction of interesting patterns from raw data using statistical and machine learning techniques [7]. Data mining techniques can be used to build an excellent predictive model and visualize its report as meaningful information [7]. Data mining techniques and tools can predict the future trend and behavior of a system and discover previously unknown patterns [7-9].

Previous works on the impact of human resources on organizational performance were analyzed using descriptive and inferential statistics. Antwi et al. adopted descriptive statistics and multivariate regression for analyzing data [3]. Delery and Gupta adopted descriptive statistics and hierarchical regression analysis to determine the influence of human resources on organizational performance [4]. Atiku et al. used both descriptive and inferential statistics to determine the influence of human resource outcomes on bank performance [6]. Descriptive statistics was extensively used to analyze data [1, 12-14]. In this study, we adopted data mining technology in detecting and predicting the influence of human resources outcomes (employee skill, attitude, and behavior) on bank performance. By adopting data mining technology, an organization can access the right information in a timely manner from a huge volume of raw data, and data mining techniques can discover hidden patterns that may not be discovered by traditional data analysis. To the best of our knowledge, one of the recent studies carried out a survey on the relationship between organizational capabilities using big data predictive analytics while achieving superior organizational performance. The result of their findings shows that financial institutions need to adopt green and flexible technologies to achieve higher operational performance, which enhances overall profit. Moreover, few studies have been conducted on the application of data mining to the banking sector, in areas such as customer retention, automatic credit approval, fraud detection, marketing, and risk management [7-10, 15, 16], but none on the prediction of bank performance with respect to human resources outcomes using the data mining approach.

The remaining part of this study is organized as follows: Section 2 shows the methodology used in this study. Section 3 presents the experiments conducted and the results of the experiments. In Section 4, conclusions are drawn with recommendations for future research.

Materials and Methods

The CRISP-DM model was implemented in this work. The model has been widely used by many researchers in the last decade [16-19]. CRISP-DM is a non-proprietary, freely available, cross-industry standard for data mining projects. It is a cyclic method comprising six phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment, as depicted in Figure 1 [20]. Python was used as the software tool for implementing this work.
It is open-source software that offers a wide range of data mining classification techniques. At the business understanding phase, a critical and extensive review of the literature was carried out to study the existing problems in the bank that have been handled by the data mining approach. The study employed a survey approach to investigate the impact of human resource outcomes on organizational performance. Multiple methods were adopted for gathering the data via the use of a structured questionnaire, personal interviews, observations, and other documented evidence (annual reports/statements of accounts for three consecutive years) [21]. The business problem was first identified, which is the growing need of the commercial banks in Nigeria to know whether human resource outcomes (employee skills, attitude, and behavior) have a great impact on the performance of the bank, using a data mining approach to extract all hidden facts and make a prediction. The performance of the bank is predicted so that competitive market campaigns can target exactly those human resource outcomes that indicate good performance of the organization. The identified problem was transformed into a data mining task by classifying organizational performance into five categories, namely outstanding, superior, good, average, and poor (Table 1), and by analyzing the available employee data with the data mining techniques selected for the classification. This is considered a supervised learning task, since the classification models are built from data that have a known target variable.

At the data understanding phase, the employee data were studied to understand the types of data collected from the bank employees that are stored in an electronic database. The rules and procedures for the collection and storage of the bank employee data were also reviewed. The details are provided in the study by Atiku [21]. At the data preprocessing phase, employee data were extracted from the employee database and organized. This dataset contains records for 305 employees, described by 19 parameters such as sex, marital status, department, educational qualification, work experience, organizational culture, organizational learning culture, organizational performance, employee attitudes, and so on [21]. The available data were transformed, and some of the parameters were removed; only the parameters that are important for the research, namely organizational performance (Table 2) and human resources outcomes (Table 3), were processed. The challenge in this data mining project is to predict bank performance based on human resource outcomes. Organizational performance is the selected target variable to be learned by the data mining algorithm. A categorical target variable is designed using the five values (categories) in Table 1: the bank performance can be outstanding, superior, good, average, or poor.
Table 2 (organizational performance items):

1. In recent years, the change in competitive advantage relative to the largest competitor has markedly improved
2. In recent years, the change in market share relative to the largest competitor has markedly improved
3. In recent years, the change in profit relative to the largest competitor has markedly improved
4. In recent years, the change in cost (product or services) relative to the largest competitor has reduced
5. In recent years, the change in sales revenue relative to the largest competitor has greatly increased
6. In recent years, the change in customer satisfaction relative to the largest competitor has greatly increased

Table 3 (employee attitude items, excerpt):

5. My work gives me a feeling of personal accomplishment
6. My job makes good use of my skills and abilities
7. How satisfied are you with your involvement in the decisions that affect your work?
8. How satisfied are you with the opportunity to get a better job at this company?
9. How satisfied are you with the information you receive from management regarding what is going on in this company?
10. How satisfied are you with the training you received for your present job?
11. How satisfied are you with your physical working conditions?
12. How satisfied are you with your involvement in the decisions that affect your work?

For each of the six performance items given in Table 2, the respondent scores (1-4) were used [21]. Table 1 depicts the scaled performance rating, which was adjusted to suit banking performance based on the Fitch recovery rating [22]; Fitch Ratings Inc. is one of the three ratings recognized as nationally acceptable by the US Securities and Exchange Commission in 1975. Since the classification technique label must be categorical, scaling is important.

At the modeling phase, the data mining model is built by classifying bank performance into the five categories given in Table 1. In this work, we employed several classification algorithms that have the potential to yield good results, including decision tree, logistic regression, nearest neighbor, random forest, gradient boosting, support vector machine, ensemble [23-25], and deep learning. Scikit-learn, a Python package for machine learning, was used to design the experiments for the proposed models. The selected classification algorithms used for this data mining project were applied to the final dataset consisting of 305 bank employee records with 4 attributes. For implementation, the 4 attributes and their descriptions are depicted in Table 4. The results of the work are given in Section 4.

Experiments, Results, and Discussion

The main objective of this study is to detect the possibility of predicting the class (output) variable with the input variables that are retained in the bank performance model. In our experiments, we used the most common classification techniques from Scikit-learn; these classification techniques are described in Section 3.1 of this study. The percentage split approach was used to divide the dataset into training and test sets (70% for the training dataset and 30% for the test dataset). The performance of the classification models was measured in our experiments on the input features using a confusion matrix from Scikit-learn to evaluate records that are correctly or incorrectly predicted by the classifiers.
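A minimal sketch of this protocol is given below: a 70/30 split and a common fit/score loop over the classifiers named above. The data frame here is a synthetic stand-in for the 305-record survey dataset, the column names follow the attribute labels in the text, and MLPClassifier stands in for the deep learning model, whose actual architecture is not detailed here.

```python
# Sketch of the evaluation protocol: 70/30 percentage split plus a common
# fit/score loop. Data, column names, and the MLP stand-in are assumptions.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(20, 100, size=(305, 3)),
                  columns=["EmpAttitude", "EmpBehaviour", "EmpSkills"])
df["Performance"] = rng.choice(["superior", "good", "average"], size=305,
                               p=[0.6, 0.25, 0.15])

X_tr, X_te, y_tr, y_te = train_test_split(
    df[["EmpAttitude", "EmpBehaviour", "EmpSkills"]], df["Performance"],
    test_size=0.30, random_state=42)

models = {
    "K-NN": KNeighborsClassifier(n_neighbors=100),
    "LogR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(n_estimators=100),
    "GB": GradientBoostingClassifier(),
    "SVM": SVC(),
    "DL": MLPClassifier(max_iter=500),
    "NB": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: train={model.score(X_tr, y_tr):.3f} "
          f"test={model.score(X_te, y_te):.3f}")
```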
Table 3 (continued; employee behavior and skills items):

19. I provide constructive feedback that helps a coworker
20. I keep coworkers informed of the progress of his/her work in group projects
21. Questions inefficient ways of doing things in his/her workgroup
22. Introduces new ways of doing things in his/her workgroup
23. Suggests improvements to increase his/her work group's efficiency
24. An employee does everything in his/her power to satisfy the customer, even when there are problems
25. Makes suggestions to improve the products and/or services offered to customers
26. Projects a positive image of the organization to customers
27. The supervisor helps you by doing things that are not part of his/her regular duties
28. The supervisor keeps you informed of important events which concern you
29. The supervisor suggests ways towards improving the work group's performance
30. The supervisor advises you on ways to improve your management practices
31. Ability to attract the best employees (employee skills, EmpSkills)
32. Ability to retain essential employees
33. Cooperation between management and other employees
34. Cooperation among employees in general
35. Motivation among employees in general
36. Quality consciousness among employees in general
37. Spending per employee
38. Absence rate
39. Turnover rate
40. Job satisfaction
41. Organizational commitment
42. Customer complaints

A confusion matrix is an N × N table that summarizes how successful a classification model's predictions were, that is, the correlation between the label and the model's classification. One axis of a confusion matrix is the label that the model predicted, and the other axis is the actual label. N represents the number of classes; in a binary classification problem, N = 2. For instance, Figure 2 is a sample confusion matrix for a binary classification problem. In this work, as depicted in Table 1, there are 5 output classes for bank performance (outstanding, superior, good, average, and poor). A binary confusion matrix is a table with 4 different combinations of predicted and actual values, as illustrated in Figure 2. The quantities true positive (TP), false positive (FP), true negative (TN), and false negative (FN) are associated with the confusion matrix. The confusion matrix is exceedingly useful for measuring recall, precision, specificity, and accuracy, as expressed in equations (1)-(6). In this study, the following metrics (equations (1)-(6)) were used to measure the performance of the classifiers.

(i) Sensitivity/true positive rate/recall is given as

    Sensitivity = TP / (TP + FN).                          (1)

Sensitivity represents the percentage of the positive class that is correctly classified.

(ii) False negative rate is given as

    FNR = FN / (TP + FN).                                  (2)

The false negative rate (FNR) represents the percentage of the positive class that is incorrectly classified by the classifier.

(iii) Specificity/true negative rate is given as

    Specificity = TN / (TN + FP).                          (3)

Specificity represents the percentage of the negative class that is correctly classified.

(iv) False positive rate is given as

    FPR = FP / (TN + FP).                                  (4)

The FPR represents the percentage of the negative class that is incorrectly classified by the classifier.

(v) Accuracy (ACC) is given as

    ACC = (TP + TN) / (TP + TN + FP + FN).                 (5)

ACC is the percentage of the total samples that are correctly classified.

(vi) F-score (F1) or F-measure is given as

    F1 = 2 × (Precision × Recall) / (Precision + Recall),  (6)

where Precision = TP / (TP + FP). F1 is a weighted harmonic average of precision and recall.

Sensitivity and specificity are the most important of the metrics shown above; they show the proportions that are correctly classified as positive or negative, respectively. Sensitivity, specificity, accuracy, and F-score (F1), among other metrics, were used for assessing the classifiers considered in this work.
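The per-class versions of equations (1)-(6) can be read off a multi-class confusion matrix with one-vs-rest counts, as in the sketch below; the toy labels are placeholders for a fitted classifier's outputs.

```python
# Sketch of deriving the per-class metrics in equations (1)-(6) from a
# multi-class confusion matrix via one-vs-rest counts.
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_metrics(y_true, y_pred, labels):
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    for i, label in enumerate(labels):
        tp = cm[i, i]
        fn = cm[i].sum() - tp
        fp = cm[:, i].sum() - tp
        tn = cm.sum() - tp - fn - fp
        sens = tp / (tp + fn) if tp + fn else 0.0   # recall, eq. (1)
        spec = tn / (tn + fp) if tn + fp else 0.0   # eq. (3)
        prec = tp / (tp + fp) if tp + fp else 0.0
        f1 = 2 * prec * sens / (prec + sens) if prec + sens else 0.0
        print(f"{label}: sensitivity={sens:.2f} specificity={spec:.2f} f1={f1:.2f}")

labels = ["outstanding", "superior", "good", "average", "poor"]
y_true = ["superior"] * 60 + ["good"] * 25 + ["average"] * 15
y_pred = (["superior"] * 55 + ["good"] * 5 +          # 60 true superiors
          ["good"] * 22 + ["superior"] * 3 +          # 25 true goods
          ["average"] * 15)                           # 15 true averages
per_class_metrics(y_true, y_pred, labels)
```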
Two types of errors, namely type 1 and type 2, can occur while evaluating the performance of machine learning classifiers. Type 1 error: false positive (FP). This type of error occurs when the value is falsely predicted: the actual value was negative, but the model predicted a positive value (equation (4)). Type 2 error: false negative (FN). The value is falsely predicted: the actual value was positive, but the model predicted a negative value (equation (2)).

The visualization of the impact of the three features of human resources outcomes (employee attitudes, behavior, and skills) is shown in Figures 3(a)-3(c), respectively. The three selected features show the greatest percentage for superior performance; the bank performance is generally superior on the input features.

Machine Learning Algorithms. Machine learning (ML) is a branch of artificial intelligence for building models/systems that can learn from data. ML is the process of teaching a computer system how to make accurate predictions when fed with data. ML algorithms are the engines of machine learning, i.e., it is the algorithms that turn datasets into models. These algorithms use a target/outcome variable (dependent variable), which is to be predicted from a given set of predictors (independent variables), and create a function that maps inputs to desired outputs. The procedure, as shown in Figure 4, continues until the model accomplishes the anticipated level of accuracy on the training dataset. The ML algorithms described in subsections 3.1.1-3.1.8 were used for classification in this study.

K-Nearest Neighbors Classifier. The K-nearest neighbors (K-NN) algorithm is arguably the simplest machine learning algorithm. Building the model consists only of storing the training dataset. To predict a new data point, the algorithm finds the closest data points in the training dataset, its "nearest neighbors". The K-NN algorithm is used for both classification and regression but is largely used for classification. It is a supervised learning algorithm that considers various centroids and utilizes the Euclidean function for distance comparison. New instances are classified using a majority vote of k of its neighbors, and the instance is allocated to the class that is most common among its K-nearest neighbors [19]. The basic steps for K-NN, depicted in Figure 5, are as follows: (i) calculate distances; (ii) find the closest neighbors; (iii) vote for labels.

In this study, we built a classification model using the K-NN algorithm. Figure 6 depicts the result of the experiments on the training and test datasets: the accuracy of the K-NN classifier on the training set is 0.74, and its accuracy on the test set is 0.75. We then chose a K-NN model with k = 100, after finding that k = 100 is a good number of neighbors for this model, and fitted it on the entire dataset instead of just the training set. An example of an out-of-sample prediction in this work is knn.predict([[88, 44, 50]]), where 88, 44, and 50 are a sample employee's attitude, behavior, and skills scores, respectively; the output is array(['Superior'], dtype=object).
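A runnable version of that K-NN step might look like the following; the 305-record matrix here is synthetic stand-in data, so the printed label will not necessarily match the paper's output.

```python
# Runnable sketch of the K-NN prediction quoted above, with k = 100 and the
# same out-of-sample employee profile (attitude=88, behavior=44, skills=50).
# X and y are synthetic stand-ins for the 305-record dataset.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.integers(20, 100, size=(305, 3))
y = rng.choice(["superior", "good", "average"], size=305,
               p=[0.6, 0.25, 0.15])

knn = KNeighborsClassifier(n_neighbors=100).fit(X, y)
print(knn.predict([[88, 44, 50]]))   # with the real data: ['Superior']
```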
The bank performance is thus predicted to be "Superior", i.e., the model assigns the superior performance class for the three input variables. We evaluate the performance of the K-NN classifier by computing the confusion matrix shown in Figure 7. The K-NN algorithm classified the bank performance as the "Superior" class; the superior class has the greatest proportion of precision, recall, F1-score, and support (Figure 7). The "support" in Figure 7 is the number of true response samples in a class.

Logistic Regression. Although called regression, logistic regression is essentially a classification algorithm appropriate for binary classification that fits data to a logistic function. It is a machine learning algorithm that estimates discrete values such as 0/1, yes/no, and true/false based on a given set of independent variables and uses the logistic function to predict the likelihood of an event, the output of which is between 0 and 1, as shown in Figure 8. In our experiment with the training and test datasets, we obtained accuracies of 74.6% and 76.1%, respectively. Experimenting with the entire dataset, we obtained an accuracy of 75%, as shown in Figure 9 with the confusion matrix and precision computation. The result also confirms that the bank performance is classified as "superior" with an accuracy of 75%.

Decision Tree Classifier. The decision tree algorithm works on both categorical and continuous dependent variables, even though it is usually used for classification. Models built with a decision tree algorithm classify an instance by traversing the tree from the root to a leaf (Figure 10(a)). The basic steps in building a decision tree are as follows: (i) select the best attribute using attribute selection measures (ASM) to split the records; (ii) make that attribute a decision node and break the dataset into smaller subsets; (iii) build the tree by repeating this process recursively for each child until one of the following conditions is met: all the tuples belong to the same attribute value, there are no more remaining attributes, or there are no more instances. The decision tree generation process (Figure 10(b)) illustrates how decision tree models are built and tested from a dataset split into training and test data: the training data are used for building models, while the test data are used for evaluating them. The built model is evaluated using the metrics in equations (1)-(6). Figure 11 depicts the performance evaluation results of the decision tree classifier in this work: the bank's performance was classified as superior with a precision of 77%. Our experiment with the decision tree algorithm indicates "superior" bank performance, with accuracies of 77.9% and 71.7% on the training and test data, respectively.
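A compact sketch of the decision-tree step follows, on the same kind of synthetic stand-in data; export_text prints the learned attribute splits (the ASM-selected decision nodes).

```python
# Sketch of the decision-tree step; export_text shows the learned splits.
# The data and the max_depth choice are stand-ins/assumptions of this sketch.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.integers(20, 100, size=(305, 3))
y = rng.choice(["superior", "good", "average"], size=305,
               p=[0.6, 0.25, 0.15])

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["EmpAttitude", "EmpBehaviour", "EmpSkills"]))
print(tree.predict([[88, 44, 50]]))
```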
Random Forest. A random forest is an ensemble of decision trees that functions by building a multitude of decision trees at training time and outputting the class that is the mode of the classifications of the individual trees. The four steps of the random forest, also depicted in Figure 12, are as follows: (i) select random samples from a given dataset, (ii) construct a decision tree for each sample and get a prediction result from each decision tree, (iii) perform a vote for each predicted result, and (iv) select the prediction result with the most votes as the final prediction. In our experiments, a random forest consisting of 100 trees was applied to the bank dataset, and we obtained accuracies of 75.1% and 75.0% on the training and test datasets, respectively. We computed the performance evaluation of the random forest classifier using the confusion matrix and obtained the results shown in Figure 13. The classifier predicted the bank performance as superior with a precision of 75%. Gradient Boosting Classifier. Gradient boosting is a machine learning boosting technique that minimizes the overall prediction error by combining the previous models to arrive at the next best possible model. The target outcome depends on how much changing a prediction influences the overall prediction error. The adaptive boosting method combined with weighted minimization, after which the classifiers and weighted inputs are recalculated, is referred to as gradient boosting classification (Figure 14). Gradient boosting classifiers aim to minimize the loss, i.e., the difference between the actual class value of the training example and the predicted class value, and they operate similarly to gradient descent in a neural network. In our experiment with the gradient boosting classifier on the bank dataset, we obtained an accuracy of 0.817 (81.7%) on the training set and 0.750 (75.0%) on the test set. Evaluating the classifier's performance by computing the confusion matrix, we arrived at the results shown in Figure 15. The classifier predicts the superior class of bank performance with 81% precision. 3.1.6. Support Vector Machine. The support vector machine (SVM) is a classification algorithm that plots a line dividing diverse groupings of data; a vector is computed to optimize the line so that the closest points of each group are as far away from each other as possible, as shown in Figure 16. In our experiment with SVM, we obtained accuracies of 0.756 (75.6%) and 0.750 (75.0%) on the training and test datasets, respectively. The bank performance is classified as superior with a precision of 74%, as illustrated in Figure 17. Deep Learning. Deep learning is a subset of machine learning involving systems that think and learn like humans using artificial neural networks. It mimics the workings of the human brain to process data (detecting objects, classifying objects, and recognizing speech) and creates the patterns necessary for decision making. As illustrated in Figure 18, the algorithm builds models by making use of hidden elements in the input division to extract features, group objects, and discover useful data patterns during the training process, which occurs at multiple levels/layers. The computational model is made up of multiple layers, known as a neural network, in which data are processed. In this study, a deep learning algorithm was applied to the training and test datasets, obtaining accuracies of 0.742 (74.2%) and 0.750 (75.0%), respectively. The bank performance was classified as superior with a precision of 74%, as depicted in Figure 19.
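A hedged sketch comparing the classifiers just discussed on the same assumed (X, y) follows; the hyperparameters are illustrative defaults rather than the authors' exact settings, and a small scikit-learn multilayer perceptron stands in for the deep learning model.

```python
# Comparison sketch on the assumed dataset schema; settings are illustrative.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("bank_hr_survey.csv")               # assumed file, as before
X, y = df[["attitude", "behavior", "skills"]], df["performance"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "random forest (100 trees)": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "svm": SVC(),
    # A small MLP stands in for the paper's "deep learning" model.
    "neural network": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: train={model.score(X_train, y_train):.3f}, "
          f"test={model.score(X_test, y_test):.3f}")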
Bayesian Classifier (Naive Bayes Model). Naive Bayes is a classification algorithm based on the concept of the Bayes theorem, one of the fundamental theorems in probability; the Bayes theorem underpins the resilience of the Naive Bayes algorithm. Data are classified using the conditional probability given in the following equation. The Naive Bayes algorithm essentially provides the probability that a record belongs to a class, given the values of the features; this is called conditional probability: P(A|B) = P(B|A) * P(A) / P(B), where P(A|B) is the conditional probability of A given B, P(B|A) is the conditional probability of B given A, P(A) is the probability of event A, and P(B) is the probability of event B. In our experiments with Naive Bayes, we obtained accuracies of 0.732 (73.2%) and 0.739 (73.9%) on the training and test datasets, respectively. Feature Importance. The decision tree was used to evaluate the importance of the three features. Feature importance rates how important each feature is for the decision a tree makes. It is a number between 0 and 1 for each feature, where 0 means "not used at all" and 1 means "perfectly predicts the target"; the feature importances always sum to 1. Scikit-learn provides additional variables with the model that illustrate the comparative importance, or contribution, of each feature in the prediction. It automatically computes the relevance score of each feature in the training phase and scales the significances so that the sum of all scores is 1. This score is crucial, as it allows you to choose the most important features and drop the least important ones for model building. In this work, the feature importance values for the evaluated features are 0.73502277, 0.09399781, and 0.17097942 for employee attitude, employee behavior, and employee skills, respectively. Figure 21 illustrates that the feature "employee attitude" is by far more important than employee behavior and skills. The employee behavior feature can be dropped, as it is less important for building the bank performance model. The focus of the banking industry should be on the employee attitude feature and on improving it, as this can take the bank's performance from the superior class to the topmost class, which is "outstanding."
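The feature importance computation described above can be sketched as follows, again under the assumed column names; the resulting scores will only approximate the values (0.735, 0.094, 0.171) reported in the text.

```python
# Feature-importance sketch on the assumed dataset; values are illustrative.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("bank_hr_survey.csv")               # assumed file, as before
features = ["attitude", "behavior", "skills"]
X, y = df[features], df["performance"]

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
importances = tree.feature_importances_              # non-negative, sums to 1
for name, score in zip(features, importances):
    print(f"{name}: {score:.3f}")

plt.barh(features, importances)                      # analogue of Figure 21
plt.xlabel("feature importance")
plt.tight_layout()
plt.show()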
Table 5 illustrates the results of the eight classifiers' accuracies on the split data (training and test datasets); the accuracy on the training set is similar or close to the accuracy on the test set. This is an indication that the models built through the classifiers are efficient, there is no overfitting, and the models generalize well to new data. Table 6 depicts the performance of the selected algorithms evaluated using the confusion matrix in Section 3.1. Table 6 illustrates how successful the classification models' predictions are, showing the crucial metrics (precision, recall, and F1-score) for the superior class, because the results of all algorithms show that the bank performance is superior. The gradient boosting algorithm has the best accuracy performance, as depicted in Table 6: it has the best precision of 0.81 and an F1-score of 0.88, and its recall score of 0.96 is very close to 1.0. The recall should preferably be 1 (high) for a good classifier. Recall becomes 1 only when the numerator and denominator are equal, i.e., TP = TP + FN, which means FN is zero (no false negatives); the K-nearest neighbors, logistic regression, random forest, support vector machine, and deep learning algorithms achieved a recall of 1.0 and therefore produced no false negatives. Moreover, the remaining algorithms (decision tree, gradient boosting, and Naive Bayes) also have recall scores close to 1.0, i.e., 0.98, 0.96, and 0.91, respectively; hence, these three algorithms have very few false negatives. Generally, all the algorithms performed excellently, having no or few false negatives. A model that produces no false positives has a precision of 1.0; high precision relates to a low false positive rate. We obtained precisions between 0.74 and 0.81 for all the classifier algorithms, which is fairly good. Furthermore, in an ideal classifier both precision and recall would be 1.00, which also means FP and FN are 0 (no false positives or false negatives). Hence, we need a metric that considers both precision and recall, and the F1-score is that metric: it is the weighted average of precision and recall. A good F1-score means that you have low false positives and low false negatives. An F1-score is considered perfect when it is 1.00, while the model is a total failure when it is 0.00. In our experiments, we obtained very high F1-scores between 0.84 and 0.88, very close to 1.00, for all the classifier algorithms (Table 6). Therefore, the performance results of the algorithms, with very good precision, recall, and F1-scores, indicate that the models built from the algorithms are efficient. All the models classify the bank performance as superior. Conclusion In this study, we employed a survey with a quantitative structured questionnaire, using efficient sampling techniques to select 305 respondents. Eight (8) different machine learning algorithms were employed to build performance models to predict the prospective performance of the Nigerian commercial banking industry based on human resources outcomes (employee skills, attitude, and behavior), using Scikit-learn packages for data analytics. The results of the analysis clearly show that human resources outcomes are crucial in achieving organizational performance. The machine learning models predict the bank performance as superior, with accuracies of 74% to 81%. The superior class has a "support" size of 227, i.e., the number of true response samples out of the overall 305 samples. To measure the performance of the classifier algorithms, a confusion matrix was used, and the good results on metrics such as precision, recall, and F1-score clearly show that the models are efficient. Out of the eight classifier models, gradient boosting has the best performance result. The feature importance was evaluated using the feature importance package in Scikit-learn to show the comparative importance, or contribution, of each feature in the prediction. Out of the three features (employee attitude, employee behavior, and employee skills), employee attitude is rated far above the others. Nigeria's banking industry needs to pay more attention to employee attitude and improve on it, so that performance can be taken from the current superior class to the outstanding class. Further work will investigate other factors and features that can also improve the performance of commercial banks [27].
Data Availability The data used to support the findings of this study are available upon request from the corresponding author. Conflicts of Interest The authors declare that they have no conflicts of interest.
2021-10-19T16:27:38.200Z
2021-09-21T00:00:00.000
{ "year": 2021, "sha1": "121b634f4fbcd522dc5ada798e2a13472ea62648", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/acisc/2021/7747907.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "3a5f1010cd67d2f82d85b42f211e3024500418ed", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
265781933
pes2o/s2orc
v3-fos-license
Automated Generation of Site Map Graphs for Robot-Inclusive Tactile Marker Placement Navigation is a primary requirement for mobile robots as they encounter hazards during operation. To improve safety for robotic deployments, hazard alert systems could be integrated into their spatial environments to reduce robotic computational load and hardware requirements, while providing benefits to visually-impaired human users in the area. Modified tactile paving markers known as the Passive Auto-Tactile Heuristic (PATH) tiles are configured for robotic usage to aid in their navigation and hazard avoidance along with a specialized Tactile Sensor Module, while providing tactile cues to visually-impaired human users. This paper proposes a novel algorithm for generating standardized layouts of 'safe routes' from site plans of the robot's workspace as input. The creation of 'safe routes' is implemented by extracting the site boundaries and outlines of the interior obstacles of an input site plan, generating the site's Medial Axis (MA) through its connectivity graph, and overlaying the corresponding PATH tile types. This implementation is explored using the Rhinoceros3D CAD program, the Grasshopper development platform, and associated plugins for processing the site plan and visualizing the eventual routes. The algorithm is tested on 5 sites of varying spatial functions for validation. I. INTRODUCTION Mobile service robots are being deployed for increasing types of menial and repetitive tasks as technology improves. They are seen as a potential supplement to manpower shortages for dull, dirty and dangerous jobs [1]. These robots aid in multiple tasks such as cleaning, area inspections, transport of goods and navigation of unknown environments, among other resource-intensive work [2], [3]. However, most environments have not been built with their operation in mind, leading to damage to robots and property, maintenance down-times and reduced productivity in the tasks they were made to supplement the lack of human labour in. Public spaces such as hotel lobbies, hospitals, libraries and some offices contain multiple spatial hazards that lead to robotic issues of collision damage, subpar navigation or incomplete tasks [4], reducing the potential of using robotic aid to mitigate monotonous jobs and dangerous maintenance tasks. Mobile service robots encounter multiple hazards during operation, both on the robot's end and in how the robot interacts with its environment. Deficiencies in the types of sensors used, or a lack of hardware or software functions, can cause the robot to fail, mainly through collisions or power outages [5]. Other negative repercussions can arise during deployment within the robots' workplaces, such as restricted access due to haphazard layouts that require human intervention, or collisions with objects that the robot does not have the sensors to detect. Problem resolution often falls under the responsibility of the robot operator, who repositions or restarts the robot as a means to troubleshoot robot failures; operators should be trained to pre-empt these types of robot failures as part of their job [6].
A. BENEFITS FOR BOTH HUMANS AND ROBOTS THROUGH IMPLEMENTATION OF ROBOT-INCLUSIVE TACTILE PAVING A large proportion of mobile service robots implement visual equipment to detect and avoid obstacles in their environment during operation. However, robots working in dynamic and ever-changing environments may face issues in deciphering inputs with visual clutter/noise, or in areas that are crowded with moving people or traffic. Thus, crowded areas fail to benefit from having robotic deployments perform the repetitive and drudging tasks such as cleaning or safety inspections. This paper proposes the implementation of a modified form of the tactile paving which the Visually-Impaired or Blind (VIB) people use for navigation and wayfinding as an alternate means of hazard alert system for robots. There is thus potential in utilizing tactile means for alerting robots to surrounding hazards [7], [8]. The benefits of using tactile inputs for alerting robots to spatial hazards in their surroundings would mainly be: 1) offloading an extent of computational power for visual data processing from the robot onto spatial infrastructure, and 2) the potential of implementing mobile service robotic aid in crowded areas, or in zones with erratic or extreme lux values [9]. On the end of benefiting VIB human users, there have been efforts to provide architectural changes for VIB people based on their conditions and requirements for navigation and hazard detection cues during travel [10], [11]. Through the implementation of such robot-inclusive tactile paving, these architectural changes could allow VIB people to access these public spaces using tactile cues, increasing the range of accessible spaces and perhaps improving their quality of life, while also enabling robotic hazard detection. Some of the existing methods that VIB people rely on for navigation, such as smart glasses [12], cannot be utilized by users with fully limited vision. Some VIB users have guide dogs and their robotic variants [13], [14], [15], but those require additional logistics to accommodate them. Other methods utilize the users' own hearing augmented with additional equipment [16], [17], which compromises their sense of hearing for conversation or other audio cues. Intelligent floors require substantial logistics and extensive embedding of electronic equipment that require active maintenance and resources [18]. Utilizing modified tactile paving would be a form of passive infrastructural modification where the active sensing is on the mobile robots or the visually-impaired users' white canes. These factors provide the basis for relying on the VIB users' white cane as the preferred mode of navigation. To mitigate such issues for robotic hazard detection and navigation, we can thus consider the use of tactile markers as an alternate hazard alert system. The existing tactile paving designs could be modified to include robotic detection and usage. These modified tactile markers could then be used to passively denote 'safe routes' within the robot's work environment, similar to the existing tactile paving that VIB people utilize as cues to find a safe path around environmental obstacles without relying on memory or excessive electronic equipment [19], [20], [21]. This would reduce the complexity on the robot's end by modifying the environment to pre-emptively warn robots of environmental obstacles, instead of utilizing computationally expensive visual detection methods to detect objects outside of their view.
B. BACKGROUND INFORMATION FOR THE EXISTING TACTILE PAVING FOR THE VISUALLY-IMPAIRED The existing tactile paving alert system for VIB users was created by Seiichi Miyake in 1965 and implemented in Okayama, Japan in 1967 [22]. Standards for the design and implementation of tactile paving are usually referenced from ISO/FDIS 23599 [23] and CEN/TS 15209 [24]. Moreover, countries such as Singapore [25], [26], New Zealand (NZ) [27], the United Kingdom (UK) [28], the United States [29] and Japan [30] have their own customized national standards for tactile paving [31], [32]. Without a global standard for tactile paving layouts, each country's implementation of tactile paving differs in layout and significance, and is prone to errors or oversights [33]. There is currently no precedent against modifications to tactile paving patterns. Examples of common tactile tile designs are shown in Fig. 1. A summary of the various tactile paving types and functions is given in Table 1, based on information gathered from [31] and [34]. With the above requirements in mind, a suitable route generation method is researched to determine suitable robot-inclusive tactile paving placements in public zones. This would enable VIB users and robots to utilize this type of spatial modification for navigation and hazard detection, as described in the subsequent Section II-A. This modified tactile paving is not meant to supersede other existing sensor methods. Instead, this system would provide the architectural elements that mobile robots could use with the relevant tactile sensors. This would extend robotic hazard detection of the work environment through their tactile capabilities. Implementing modified tactile paving for robots in public spatial infrastructure would also require a proper method for their placement in the environment, to avoid sending wrong signals or causing damage to users of the zone. This paper thus seeks to: 1) explain the rationale for integrating robot-inclusive modified tactile paving in spaces for robot deployments, 2) provide a basis for arranging/configuring the modified tactile alert system within a given zone, and 3) apply the arrangement algorithm on various site plans for validation. C. ROBOT-INCLUSIVITY PRINCIPLES FOR MODIFIED TACTILE PAVING Robot designers are to take note of robustness, pricing, sensor performance and system integration to improve the robots' productivity when implementing multiple sensor arrays on mobile robots. The typical way of resolving operational problems often involves improving the robot's equipment and programs to enable the robot to explore and detect its surroundings better. Examples of such improvements include deployments of advanced control methods [35], complex hardware enhancements [36], reconfigurable mechanisms [37], and algorithmic improvements [38], [39]. The robot eventually becomes more complex and costly in order to surpass environmental challenges during operation. The 'Design for Robot' (DfR) methodology would help mitigate such issues during robot deployments by modifying the environment for better robot productivity. Using the Universal Design (UD) guidelines [40] as a basis, a set of DfR principles was created for producing more favourable working zones for robots to passively enhance their task outputs [41], [42]. These outputs can be quantified using metrics such as percentage of area covered, or time taken for waypoint travel routines. Existing approaches were examined by Ivanov et al.
[43], who discussed design suggestions for mobile service robots like waiter robots and drones being deployed in hospitality premises such as hotels and restaurants. There are other examples of changing architectural elements to enhance robotic productivity, mostly in the areas of robotic floor cleaning [41], vertical garden robotic maintenance [44], false ceiling maintenance [45] and drain inspection [46], respectively. A simplified version of the overall DfR methodology can be seen in Fig. 2. The two main approaches of DfR utilize deductive and inductive processes. The deductive approach determines the robot's constraints using the provided floor plans and drawings of its work environment to evaluate the robot's productivity in the zone before deploying the actual robot. The inductive approach uses physical trial runs of the robot on site, obtaining insights from the results and operational feedback. Design recommendations for both the robot and the environment can then be generated using the DfR robot-inclusive principles, which are namely safety, activity, manipulability, observability and accessibility [47]. This paper focuses on the deductive stage of the DfR process. The 5 robot-inclusive principles are based on ensuring protection for robots, the environment and the other users of the zone during robot operation. The Safety principle aims to cut down on hazards found in the robot's work zone and reduce incidents of robot or environmental damage during deployments. The Observability principle determines how the object layout can be considered and made more distinct for robotic navigation and hazard detection. The Activity principle strives to implement a cohesive system whereby a collaborative work environment can be achieved for both robot and human users simultaneously. The Accessibility principle aims to generate a barrier-free environment for the robot to perform tasks in a given spatial zone. The Manipulability principle focuses on methods by which objects that robots interact with during their operation can be handled more readily. The listed guidelines for each section vary for different spatial environments, and are to be updated as technology and spatial requirements evolve and adapt over time. These guidelines enabled the development of a robot-inclusive tactile paving system as an alternative hazard alert system for mobile service robots. For the purpose of hazard detection, the modified tactile paving focuses on the Observability and Accessibility DfR principles. Under the Observability DfR principle, methods of making environmental hazards more apparent for robots by changing their spatial qualities are considered. Some examples include, but are not limited to, the addition of opaque markers onto glass panels, or utilizing Quick-Response (QR) codes as location markers for robotic usage. In this case, the modified robot-inclusive tactile paving would provide robots with environmental cues on the 'safe route' within their work zone, or enable the robot to detect hazard cues pre-emptively.
Moreover, the implementation of a modified robot-inclusive tactile paving system is aligned with the Accessibility DfR principle, allowing mobile robots to access a higher proportion of the workspace and circumvent workspace hazards more easily. Implementing this system of updated tactile cues would also lead architects to make their buildings more inclusive to VIB users, and provide a more holistic usage of the architectural space. The main information to encode in the tactile pavings would be: 1) hazard type, and 2) distance from hazard. The tactile pavings should be made modular for easier installation and for potential combination of different tiles to convey varied hazard data types. The existing tactile paving tile system used by VIB human users serves as a reference for a potential robot-inclusive tactile paving system known as the Passive Auto-Tactile Heuristic (PATH) tiles, expanded upon in Section II-B. Section II describes the rationale and criteria for generating obstacle-free routes from site plans, the tactile paving hazard alert system known as the Passive Auto-Tactile Heuristic (PATH) tiles, its corresponding tactile sensor module, and the algorithm for placement of these PATH tiles within a built environment with a provided site plan. Section III provides and discusses the results of the algorithm applied to five existing architectural floor plans. The paper is then concluded in Section IV. A. GENERATION OF OBSTACLE-FREE ROUTES FROM PROVIDED SITE PLANS For navigation tasks, it would be ideal for the route to be direct and compact for faster completion of intended goals and less energy wastage. This could be applied to mobile robotic tasks of checkpoint tracking and exploration. An idealized route for robots to follow would be one that allows them to: • avoid all obstacles in the environment that would cause collisions or damage, • be pre-emptively informed of hazards in their path, • reach a majority/all points of the accessible zone (if the robot is to perform area coverage tasks). By providing the shortest access route through the space as the expedient route, the mobile robots would then be able to split off from this shortest route to navigate and access their required zones. This involves finding the main spline that avoids all obstacles and walls of a room, while maintaining the connectivity among the rooms for this route of expediency. Such a route is obtained by taking the center line among all walls and objects present, known as the MA [48]. The MA, also known as a 'topological skeleton', preserves a space's connectivity, linear dimensions, and topology. This determines a path within a site plan that avoids all obstacles and walls present by collapsing the spatial boundaries into their topological MA. The robot's size can be overlaid afterwards to check whether it fits within the space as part of its accessible zone. In cases where the site plan layouts lead to difficulties in generating a MA, approximations of the MA could be used instead. This is especially useful for sites with irregular or self-intersecting zones. Fortunately, such cases are limited by the physical constraints of building design, which favors straight walls and radial circulation from a common entry point. Multiple algorithms exist to obtain the MA of a given shape. For our case, the site plan can be abstracted into its walls and interior furniture to generate a bounded shape on which to perform MA operations.
Common approaches include the Medial Axis Transform (MAT), skeletonization by boundary shrinking (SS), and Voronoi Diagram (VD) based methods. MAT works by applying a distance transform on the shape. MAT calculates the distance from each point within the shape to its closest point on the boundary. The points that are locally maximal lie upon the approximated MA. MAT can be optimized using other types of transforms, such as Euclidean distance transform algorithms [55], [56]. SS approximates the shape's MA by iteratively shrinking the shape boundary inwards while maintaining the boundary's topology. The eventually collapsed line traces out the shape's 'skeleton' and is the approximation of the shape's MA. Other similar skeletonization algorithms work in a similar way of iteratively reducing the shape's boundary inwards [55], [57] without compromising the shape's connectivity and topology, reducing a shape into its core skeleton lines to obtain the MA approximation. VD is conducted by splitting the shape's boundary curve into intervals to obtain the points for Voronoi cell generation [58]. Only the Voronoi edges that are strictly within the shape are considered for obtaining the shape's MA. The selected Voronoi edges are joined together to create the shape's MA. The MA's degree of refinement can be adjusted by changing the intervals by which the boundary curve of the shape is split: the smaller the intervals between the points on the boundary curve, the more refined the eventual MA would be. The resulting line is then smoothened to obtain the shape's MA [59], [60]. These path generation methods can be hybridized or combined with other algorithms to mitigate their individual drawbacks. However, limitations for mobile robots navigating a complex environment within public spaces still remain, due to erroneous environmental inputs or constraints in robotic hardware. Other constraints that mobile robots encounter during operation mainly involve the complexities of real-time navigation, the level of complexity of the work environment, accuracy in discerning obstacles in the environment, the robot's own capabilities and energy efficiency, hardware limitations and other safety regulations [61]. These operating conditions also affect the choice of path planning methods used by mobile robots for their tasks [62], [63], and depend on whether the robot has a pre-loaded map in its software for navigation, and whether the robot requires dynamic obstacle detection during its work.
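As a rough illustration of the VD-based MA approximation described above, the following Python sketch samples a shape's boundary at regular intervals, builds a Voronoi diagram from the samples, and keeps only the edges strictly inside the shape; the paper's actual implementation uses Rhino3D/Grasshopper, so scipy and shapely are assumed stand-ins here.

```python
# Sketch of VD-based medial-axis approximation; tooling is an assumption.
import numpy as np
from scipy.spatial import Voronoi
from shapely.geometry import Polygon, LineString

def approximate_medial_axis(boundary_pts, spacing=0.1):
    poly = Polygon(boundary_pts)
    # 1) Split the boundary curve at regular intervals to get Voronoi seeds.
    ring = poly.exterior
    n = max(int(ring.length / spacing), 8)
    seeds = np.array([ring.interpolate(i / n, normalized=True).coords[0]
                      for i in range(n)])
    # 2) Keep only the Voronoi edges strictly inside the shape.
    vor = Voronoi(seeds)
    edges = []
    for v0, v1 in vor.ridge_vertices:
        if v0 == -1 or v1 == -1:              # skip infinite ridges
            continue
        seg = LineString([vor.vertices[v0], vor.vertices[v1]])
        if poly.contains(seg):
            edges.append(seg)
    return edges                               # join/smooth these to form the MA

# Example: a rectangular room; a finer spacing yields a more refined axis.
room = [(0, 0), (6, 0), (6, 3), (0, 3)]
print(len(approximate_medial_axis(room, spacing=0.2)), "interior Voronoi edges")
```

The retained interior edges still need the joining and smoothing steps mentioned in the text before they form a usable center line.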
B. PASSIVE AUTO-TACTILE HEURISTIC (PATH) TILES The Passive Auto-Tactile Heuristic (PATH) tiles [64] are a catalogue of modified tactile paving tiles designed for conveying basic environmental hazard data to robots equipped with a dedicated tactile sensor. The implementation of PATH tiles reflects the DfR methodology of improving robotic productivity through robot-inclusive architectural design changes, rather than solely improving the robots' functionalities [41], [44], [65]. This proposed infrastructural system of PATH tiles is a novel non-intrusive architectural integration aiming to improve mobile robots' ability to detect and react to environmental hazards in their deployment sites. Implementing this modified tactile system could also provide an impetus for correcting installation errors in existing tactile paving layouts. The PATH tiles are categorized into two types: one for hazard type (hazard PATH tiles), the other for conveying the distance from the PATH tiles to an obstacle (distance PATH tiles), as seen in Fig. 3, enabling differentiated responses to different hazard placements [64]. This divergence between hazard and distance PATH tile designs enables customization between the various safety distances and hazard types based on the object layouts of different areas. For the distance PATH tiles, it is presumed that the robot takes its current location on the PATH tile as the corresponding distance from the hazard. A PATH tile is sized 300 mm by 300 mm and has reflectional symmetry along the axis of travel. Counterparts to existing tactile indicators have been designed, as seen in the bump designs of the guidance tile, moving obstacle, and vertical level change PATH tiles, which are analogous to the existing guidance tile, blister tile, and offset blister tile designs, respectively. Truncated blisters indicate upcoming hazards, while raised bars or stripes aligned to the direction of travel indicate safe routes on which to travel in the updated tactile paving system, reducing the learning curve for human VIB users. The blister PATH tiles would be located near hazards to alert users in advance, while the guidance PATH tiles would be located on a route that minimizes obstruction and leads users to places or objects of interest within the zone. The blister and guidance tiles can be arranged in a similar manner to convey the same type of information on which path to follow, being analogues of the existing tactile paving. The implementation of the newly modified PATH tile layout is shown in Section III of this paper. As tactile paving layout implementation and regulations differ among countries, there is a need to create an algorithm for the placement of PATH tiles to pre-emptively avoid similar problems, especially since current mobile service robots have less leeway in dynamic decision-making during deployment. The algorithm takes as inputs the mapped site and the robot's detection range based on its sensor array, and provides a PATH layout for future robots to avoid hazards and travel to their required waypoints more efficiently. Consistency of the signals conveyed would aid VIB users in interpreting the updated tactile paving. This is reflected in the design choice where truncated blisters are still used mainly to indicate hazards, and raised bars or stripes are used to indicate paths of travel in the updated tactile paving system [29]. The PATH tile placement algorithm is explained in Section II-D. C. TACTILE SENSOR MODULE In order to enable mobile robots to detect and ascertain tactile cues from the PATH tiles, a dedicated Tactile Sensor Module (TSM) is required. This TSM functions similarly to the comb of a music box for surveying tactile cues [64]. In Yeo et al. [64], a TSM as seen in Fig. 4 was created using multiple contact limit switches arranged on a linear frame, to be mounted on a mobile robot for PATH tile detection. A tuned Chebyshev Graph Neural Network (CGNN) model [66], [67] was implemented for parsing tactile cues from the PATH tiles and interpreting them to avoid the hazards ahead. However, that paper focused only on the TSM and CGNN implementation rather than the method of placement within public zones.
D. GRAPHING OF SITE PLAN It is ideal to generate the PATH tile layout arrangements through automation rather than manual methods. However, rules for arranging furniture to improve robotic navigation and wayfinding safety are still needed for a proper robot-inclusive configuration of the given zone. By considering the robot's range of detection by its sensors, we can determine the effective detection and movement field for it to capture data from its environment within a specified period of its run. To this end, graphing the site into an abstract form helps reduce the complexity for the robot to parse and travel through the site. The site can be converted into a graph that shows the connections between multiple points of interest within the site, a network of path possibilities [68]. Two zones are connected topologically if they share a common wall or doorway, whereby they are described as 'mutually adjacent' and 'mutually accessible', respectively, in Dawes et al. [69]. Graphing a site into its simplified diagram allows for easier computation of an eventual optimal path methodology for robots, which could utilize the methods described in [70] for determining passages or routes in a site with multiple location nodes. This algorithm is created for the purpose of laying out robot-inclusive PATH tiles as hazard alerts for both visually-impaired/limited-vision human users and robots in a standardized manner. The use of isovist graphs aids in separating the space for centroid node identification [71]. Isovist graphs are 2-dimensional planar representations of the visible area viewed from a singular view point within a space, limited by occluding boundaries and view limits from the current position. The terminology of isovist graphs is seen in Fig. 5. The eventual graph for our scenario takes the cumulative isovists and room separation lines to obtain a form of spatial separation between different zones within the site for further analysis and graphing, to place location nodes and determine the placement of the PATH tiles as a form of hazard alert system for mobile robots. It is assumed that if the robot can travel between 2 points without interruption or obstacles (e.g., furniture or walls), the entire space can be collapsed into a single location node at its weighted centroid in the simplified graph for the given site plan. The summarized method of graphing an input site plan is: 1) splitting the overall site into its constituent rooms, 2) assigning graph nodes to the centroids of each discrete zone, 3) connecting the nodes together to generate a connectivity graph, then refining the graph and determining the connectivity of each node using wall/obstacle boundary outlines for MA generation, and 4) determining the standardized placement and layout of PATH tiles for the specific site plan. These steps are further explained in the subsequent subsections II-D1 to II-D4; a small illustrative sketch of the graphing in steps 1-3 is given below.
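The sketch below illustrates steps 1-3 on a toy layout: rooms become graph nodes at their weighted centroids, doorways become edges weighted by Euclidean distance, and rooms are numbered by their connectivity from the ingress point (cf. Fig. 8); networkx and all room names/coordinates are assumptions.

```python
# Toy connectivity-graph sketch; room labels and coordinates are hypothetical.
import networkx as nx

G = nx.Graph()
# (room label, weighted-centroid coordinates in metres)
rooms = {"entry": (0, 0), "corridor": (4, 0), "room_2a": (4, 3), "room_2b": (4, -3)}
for name, centroid in rooms.items():
    G.add_node(name, pos=centroid)

# Connect rooms that share a doorway; edge weight = Euclidean node distance.
for a, b in [("entry", "corridor"), ("corridor", "room_2a"), ("corridor", "room_2b")]:
    (xa, ya), (xb, yb) = rooms[a], rooms[b]
    G.add_edge(a, b, weight=((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5)

# Number rooms by their degree of connectivity from the ingress point.
hops = nx.single_source_shortest_path_length(G, "entry")
print(hops)   # {'entry': 0, 'corridor': 1, 'room_2a': 2, 'room_2b': 2}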
1) DETERMINING CONSTITUENT ROOMS FOR A GIVEN SITE For step 1, the overall zone is split into its constituent rooms if it spans an area with doors and openings. The robot's view is modelled as a point source initially positioned at the entrance of the site. The use of isovist graphs aids in separating the space for node identification [71], [72], [73], by considering the room boundaries and obstacle bounding boxes as occluding bounds to the robot's sensor view. Isovist graphs are beneficial in providing a logic for creating convex spaces on which further analysis can be conducted in terms of visual connectivity and distance to other points based on line-of-sight connections [74]. Open zones that are much larger than the robot's footprint are assigned a node at the zone's weighted centroid. If obstacles are present within the zone, the unobstructed area is subdivided into quadrilaterals wherever possible to make it simpler to obtain the centroids of the subdivided spaces. Objects are determined to be obstacles if they occlude the robot's view ahead (perceptional obstacles), or if they are physical obstructions (navigational obstacles). To obtain the centroid of a given discrete zone, we model the robot's Point of View (PoV) as a 360° point source to conduct the isovist operation. Discrete or unconnected obstacles are given a bounding box to remove irregular outlines and allow easier processing of the site plan. Radial lines from the initial robot's view are extended until they encounter the room corners or an obstacle's bounding box. The centroid of each discrete zone is obtained by finding the PoV point where its isovist ray lengths to the room corners or obstacles are equal, forming convex polygons. Thus, for given room corners C_i, i = 1 to n, we are to find the point P(x_p, y_p) where all distances between P and the corners are equal, as given in (1): sqrt((x_p − X_Ci)² + (y_p − Y_Ci)²) = sqrt((x_p − X_Cj)² + (y_p − Y_Cj)²) for all i, j ∈ {1, ..., n}, (1) where (X_Ci, Y_Ci) are the coordinates of the i-th corner. Unconnected obstacles also split the zone into multiple quadrilaterals and their corresponding centroid nodes. An example can be seen in Fig. 6. FIGURE 6. Process for obtaining discrete zones and their centroids: 1a) for a given room boundary of 4 corners (C1-C4), indicated by the dotted red line, model the robot's point of view (PoV) as a 360° point source to conduct the isovist operation; 1b) assume isovist ray length = infinity for rays to hit room corners/obstacles; 1c) move the robot's modelled PoV point such that the isovist ray lengths to the room corners are equal, to obtain the discrete zone's centroid (in this case P3); 2) multiple discrete zones within a single room will be generated due to the presence of an obstacle (modelled here as a grey rectangle) or interior walls.
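A small numerical sketch of the equidistant-point condition in equation (1) follows; it linearizes the equal-distance constraints and solves them in a least-squares sense (numpy assumed, corner coordinates hypothetical).

```python
# Sketch of equation (1): find the point P equidistant from the room corners.
import numpy as np

def equidistant_point(corners):
    C = np.asarray(corners, dtype=float)
    # |P - C_i|^2 = |P - C_0|^2  =>  2 (C_i - C_0) . P = |C_i|^2 - |C_0|^2
    A = 2.0 * (C[1:] - C[0])
    b = (C[1:] ** 2).sum(axis=1) - (C[0] ** 2).sum()
    P, *_ = np.linalg.lstsq(A, b, rcond=None)   # least squares if over-determined
    return P

corners = [(0, 0), (6, 0), (6, 4), (0, 4)]      # hypothetical rectangular room
print(equidistant_point(corners))               # -> [3. 2.], the room's centre
```

For rooms whose corners admit no exactly equidistant point, the least-squares solution returns the closest compromise, which is consistent with the subdivision into convex quadrilaterals described above.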
Separated rooms and occluded edges are then marked out using dotted site separation lines. In this way, a site can be split into multiple zones, each specific to the view of the robot and how it would view or assess the space with its sensor array, limited by its detection range and field of view. Occluded zones are split from the main space using dashed lines. Corridors and narrowed spaces have their own separation lines. A discrete zone is defined as the largest non-occluded isovist perimeter within a room boundary. The weighted node of each discrete zone is located at the centroid of its isovist rays, or simply at the center of the discrete zone if the room itself has no obstacles within it. This process can be scaled up for site plans with multiple rooms and corridors. Corridors or passageways that connect multiple rooms are also tagged as separate 'rooms', as shown in Fig. 7. For our scenario, obstacle bounding boxes are added onto the input map even if they are out of the robot's planar isovist view but still present as navigational obstacles. 2) NUMBERING NODES BY PROXIMITY AND CONNECTIVITY For step 2, the individual rooms are numbered by their degree of connectivity from the ingress point of the site. This connectivity degree is determined by the room's proximity to the ingress point, and by how many distinct rooms a robot has to pass through after entering the site in order to reach the room of interest. An example is seen in Fig. 8, with the example site's ingress point located on the left. With reference to Fig. 8, the rooms are labelled based on the number of steps needed for the radial connectivity isovist (RCI) to terminate (room 2a compared to room 2b; rooms 4a and 4b compared to room 4c), and by their proximity to the ingress point (room 4a versus room 4b). The edge weights of the connectivity graph are mainly the Euclidean distances between a node and its neighbouring nodes in other rooms. 3) DETERMINING PATH TILE PLACEMENT For step 3, after the zone has been sub-divided using the rules and guidelines for segregating the zone detailed in the previous section, we obtain the site's connectivity graph. This connectivity graph can then be used to assign PATH tiles at the crucial locations where the robot might have problems determining the next actions to take. The generated MA is dependent on the resolution and regularity of the initial boundary curve, which can be mitigated by pre-processing the input site boundary curves to make them more regular and defined wherever possible. Disconnected or un-linked branch lines may be generated due to boundary curve irregularities or artifacts introduced during the Voronoi cell generation and its subsequent intersection lines. Post-processing steps of branch pruning or skeleton simplification may be needed to cull the unwanted vertices and branch lines, and to touch up the obtained MA curves. For complicated shapes (or site plans in this scenario), the zone could be sub-divided beforehand to split up the zones for easier VD computation and MA generation. Complex or irregularly shaped zones may lose some detail when generating their MA using VD. The polyline curves generated by VD could give sub-optimal results in the presence of narrow passageways/corridors, or for site plans with multiple self-intersections that could cause similar self-intersections in the generated MA, which have to be resolved manually in the post-processing step.
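The branch pruning mentioned above can be sketched as a simple graph operation: leaf edges of the skeleton shorter than a threshold are culled iteratively; networkx and the threshold value are assumptions.

```python
# Branch-pruning sketch; library choice and threshold are assumptions.
import networkx as nx

def prune_short_branches(skeleton, min_len=0.3):
    """skeleton: graph whose edges carry a 'length' attribute (metres)."""
    G = skeleton.copy()
    changed = True
    while changed:
        changed = False
        for node in [n for n in G.nodes if G.degree(n) == 1]:  # leaf vertices
            u, v = next(iter(G.edges(node)))
            if G.edges[u, v]["length"] < min_len:
                G.remove_node(node)           # cull the stray spur
                changed = True
    return G

# Toy skeleton: a main line with one short artifact branch.
S = nx.Graph()
S.add_edge("a", "b", length=5.0)
S.add_edge("b", "c", length=5.0)
S.add_edge("b", "spur", length=0.1)           # artifact from boundary noise
print(sorted(prune_short_branches(S).nodes))  # ['a', 'b', 'c']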
These methods of MAT, SS and VD each have their advantages and drawbacks, mainly in terms of the types of shapes the algorithm can process, computational efficiency and the required level of accuracy of the generated MA. However, VD is preferred here for creating the MA in our case, as it can handle shapes with concavities and preserves the topological features of the original shape. VD also provides information on the shape's connectivity and ease of access from other zones within the same shape. For this paper, blister PATH tiles are placed at main entry points and path bifurcations. Guidance PATH tiles are placed along the vertices of the MA lines between the nodes. This provides an initial layout of the medial axes on which further designs of PATH tiles could be added. If multiple robots are being deployed within the same area, their positions within the site could be modelled as Voronoi seeds to see how they would affect the generated routes for the other robots as well. 4) IMPLEMENTATION OF PLACEMENT ALGORITHM USING RHINO3D GRASSHOPPER PLUGIN Step 4 of the algorithm involves the placement of the PATH tiles described in Section II-B. To test the placement algorithm, images of 2D site plans were input into the Rhinoceros 3D CAD program (Rhino3D) using the Grasshopper plugin for visualization and image processing, to generate the node placement and planned PATH tile placement. The benefit of using these programs is the ease of importing various file formats for site plan images or their CAD files, along with access to parametric adjustment of the values used by the algorithm. Sub-plugins of 'Manhattan', 'Clipper', 'Topologizer' and 'Pufferfish' components are also required within the Grasshopper libraries for the algorithm to work. At this stage, the code implements the blister and guidance PATH tile patterns on the input site plans. The initial block of Grasshopper code is used for image processing. It passes the extracted data to the next block for extracting the centroids of each discrete zone, which are then connected using Manhattan distance metrics. The output vertices are then tagged with the corresponding hazard PATH tiles, and the relevant distance PATH tiles are added afterwards to fit local architectural regulations. Each component of the Grasshopper code represents a function applied to the preceding inputs. The entire process is split into 4 main blocks of code. The various stages of the algorithm are documented in Fig. 9, while examples of the executed code are shown in Section III. Two methods are used for extracting the wall boundaries and hazard outlines in the algorithm: 1) extracting from an image, or 2) extracting from a traced input polyline in the Rhino3D software. The first method is reliant on the image quality, whereas the second method requires additional time to trace out the outlines. However, using the second method reduces the post-processing work required on the extracted site boundaries and the resulting VD centerlines.
The first method for extracting wall and spatial hazard boundary data takes in a monochrome site plan and extracts the spatial features of the walls and interior furniture of the site (Step 1a of Fig. 9, the feature extraction stage using a monochrome image). The walls and main obstacles are extracted from the input site plan by using a monochrome colour filter, with walls and obstacles coloured in black, and the area permitted for robotic access in white. The data noise is filtered out using a bounded-size filter to remove unwanted lines or small spots that would arise from error signals of the object placements. The eventual lines and borders are then joined to form boundary curves for further processing. The second method (Step 1b of Fig. 9, the feature extraction stage using a traced polyline) extracts the room boundaries using the boundary curve output after the site plan is processed, along with obtaining the footprint of any obstacles in the robot's path. A Rhino polyline traced from the site plan and the interior furniture footprints can also be used as input, which removes the boundary line processing/filtering step. Outlines of structural pillars that protrude from the walls may be omitted at this stage to minimize additional artifacts created during the VD centerline / MA generation stage. The subsequent block of code processes the accessible area to generate two outputs from the site plan: 1) the room centroids and 2) the MA of the site. This generates the path with the fewest obstacles for the robot to traverse/cover. The boundary curve of the accessible area is then split for Voronoi cell generation to obtain the center line of the traversable space, as seen in Fig. 10. The area considered for Voronoi cell generation is also an analogue of the bounded isovists that are clear of local obstacles. The room centroids are determined by subdividing the accessible zone at the doorways or corridor entrances/exits, and obtaining the centroid of each subdivided area. These points then serve as the nodes of the site plan's graph, as well as the center points of the Voronoi cell generation in each room, yielding a loop route for increased area coverage within the room as compared to a linear route. The last block of the code implements the guidance and blister PATH tiles over the different points of the generated route, based on their location on the planned routes, with blister PATH tiles located at the endpoints of the generated route and guidance PATH tiles at regular intervals along the lines of the generated routes. Overlapping tile patterns were culled or further processed to provide a cohesive tile layout within the chosen site plans. In summary, the method of graphing a site plan in pseudocode form is given in Algorithm 1.
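The logic of the last code block can be sketched outside Grasshopper as follows, with guidance tile centers interpolated every 300 mm along a route polyline and blister tiles at its endpoints; shapely is an assumed stand-in, and the overlap-culling step is omitted for brevity.

```python
# Tile-placement sketch; library choice is an assumption, not the paper's code.
from shapely.geometry import LineString

TILE = 0.3   # PATH tile edge length in metres (300 mm)

def place_path_tiles(route_pts):
    route = LineString(route_pts)
    n = int(route.length / TILE)
    # Guidance tile centers at regular 300 mm intervals along the route.
    guidance = [route.interpolate(i * TILE).coords[0] for i in range(1, n)]
    # Blister tiles mark the route's entry point and termination.
    blister = [route_pts[0], route_pts[-1]]
    return guidance, blister

guidance, blister = place_path_tiles([(0.0, 0.0), (3.0, 0.0), (3.0, 1.5)])
print(len(guidance), "guidance tiles;", len(blister), "blister tiles")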
III. RESULTS AND DISCUSSION A. RESULTS Five site plans of existing buildings in various locations were input into the Grasshopper code for generating the 'safe route' layouts. These sites were chosen for their varied functions, and to see how the tile layouts would be implemented on the different sites based on the buildings' footprints and interior layouts. Site 1 is a site plan depicting a retail zone, the basement level of the GRID shopping mall located in Singapore [75]; Site 2 represents an office site, FINE Design Group [76]; Site 3 shows an industrial warehouse design [77]; Site 4 shows the ground level floor plan of the Smart New World Innovation Center, a mixed-use building [78]; and Site 5 depicts a section of an airport, Level 2 of Singapore's Changi Airport Terminal 4 [79]. The referenced site plans are seen in Fig. 12. The generated MA routes are seen in Fig. 13, while their respective finalized PATH layouts are seen in Fig. 14. The results show successful implementation of the algorithm on the 5 test sites of varying complexity, scale and function. PATH tile layouts were generated for all 5 sites as functional tile layouts, i.e., the layouts do not intersect the existing walls or obstacles, and they provide continuous paths from entrance points to other areas of interest within the buildings which mobile service robots can now use. B. DISCUSSION The algorithm worked on generating the PATH tile layouts for all 5 existing sites. However, their individual codes required customized fine-tuning due to stray lines generated in the VD creation stage, or boundary issues that resulted in tiles intersecting the wall lines. These were solved by culling the tile curves located in the invalid areas. Future work would include applying the remaining PATH tile patterns on their corresponding hazards in the site plans, along with implementing PATH tile layouts on a real-life test site to test the efficacy of the PATH tiles for mobile robot deployment safety. As no precedent has been found for this application and its use cases, automating the tile layout process would help architects and civil engineers visualize and generate the routes for robots to avoid the spatial obstacles in their work environment, and provide further feedback on better installation of the robot-inclusive tactile marker placements.
FIGURE 1. Existing types of tactile paving tiles used by the visually-impaired or blind (VIB) people.
FIGURE 2. Design for Robot (DfR) methodology detailing the deductive and inductive stages and its involvement in creating robot-inclusive spaces.
FIGURE 7. Generated centroids of discrete zones; corridors are tagged with their own centroids.
FIGURE 8. Rooms numbered based on their proximity and connection to preceding rooms.
FIGURE 9. Summary of the PATH implementation method on input floorplans.
FIGURE 10. a) Obtaining center lines of accessible zones after post-processing VD lines from a sample site plan (generated lines coloured in red, remaining center lines in green); b) diagram of the Grasshopper console and partial code.
FIGURE 11. Types of MA generated: a) fishbone; b) inflated fishbone (if central node included); c) warped fishbone (obstacles at side of room); d) warped inflated fishbone (obstacles present at side and middle of room); generated paths in green.
Algorithm 1: Summary of Site Graphing Process
Require: Site plan
1: Extract interior boundaries of walls of site plan
2: Extract boundaries of objects within site
3: for i = 1 : No_of_boundaries do
4:   Segment boundary[i] for Voronoi Diagram (VD) generation
5:   Create VD cells using the segment endpoints as centers
6:   if line intersects/contacts boundary[i] OR self-intersects then
7:     Cull the invalid line
8:   else if line is aligned to the main spline then
9:     Join at ends or generated segment intersections to form a continuous trimmed line
10:  end if
...
13: Valid and trimmed MA is generated
14: for MA[i] do
...
     if point located at a route endpoint then
       Overlay blister PATH tile pattern on the point
     else if point located along the line length then
23:    Align guidance PATH tile pattern along the line direction
24:    Overlay guidance PATH tile pattern centered on the points
25:    if guidance patterns overlap/intersect then
         Cull the overlapping pattern
       end if
     end if
29: end for
FIGURE 12. Reference sites used for validation. Site (a): retail zone, the basement level of the GRID shopping mall located in Singapore [75]; Site (b): office, FINE Design Group [76]; Site (c): industrial warehouse design [77]; Site (d): mixed-use building, Smart New World Innovation Center [78]; Site (e): airport, Level 2 of Singapore's Changi Airport Terminal 4 [79]. Images not to scale relative to each other.
FIGURE 13. Medial Axis (MA) routes generated for sites of varying functions: a) retail; b) office; c) industrial; d) mixed-use building; e) transport hub (airport); generated routes in green, images not to scale relative to each other.
FIGURE 14. Results: final PATH routes generated for sites of varying functions: a) retail; b) office; c) industrial; d) mixed use; e) transport hub; generated guidance tile layouts in light yellow, blister tiles in orange, images not to scale relative to each other.
TABLE 1. Summary of existing tactile paving types.
2023-12-07T16:14:10.983Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "f8df986ce88b190bd209177ea0f45fcfd4aef2f6", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/10345574.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "926f5e6e588689cc279002e68ca82e2dacaa43bd", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [] }
212726190
pes2o/s2orc
v3-fos-license
Wigner-Smith Time Delay Matrix for Electromagnetics: Theory and Phenomenology Wigner-Smith (WS) time delay concepts have been used extensively in quantum mechanics to characterize delays experienced by particles interacting with a potential well. This paper formally extends WS time delay theory to Maxwell's equations and explores its potential applications in electromagnetics. The WS time delay matrix relates a lossless and reciprocal system's scattering matrix to its frequency derivative and allows for the construction of modes that experience well-defined group delays when interacting with the system. The matrix's entries for guiding, scattering, and radiating systems are energy-like overlap integrals of the electric and/or magnetic fields that arise upon excitation of the system via its ports. The WS time delay matrix has numerous applications in electromagnetics, including the characterization of group delays in multiport systems, the description of electromagnetic fields in terms of elementary scattering processes, and the characterization of frequency sensitivities of fields and multiport antenna impedance matrices. U. R. Patel and E. Michielssen are with the Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109 USA (e-mail: urpatel@umich.edu; emichiel@umich.edu). I. INTRODUCTION In 1960, Felix Smith published a seminal paper, "Lifetime Matrix in Collision Theory," describing procedures to characterize the time delays experienced by particles during quantum mechanical interactions [1]. Starting from the Schrödinger equation, Smith showed that the matrix Q = j S† (∂S/∂ω), (1) where S is a potential well's scattering matrix and ω denotes angular frequency, fully characterizes the particles' average time of residence in the system. Over the past 60 years, the Wigner-Smith (WS) time delay matrix Q (as it has come to be known) has found many applications in quantum mechanics, including the study of particle tunneling through potential barriers [2]-[3], the characterization of photoionization and photoemission time delays [4]-[5], and the analysis of decaying quantum systems [6]. For an excellent review of the field, see [7]. References to Smith's paper in the electromagnetics literature have been few and far between, however. An exception is [8], where group delays of fields interacting with a two-port waveguide were characterized in terms of their WS dwell times. In optics and photonics, WS time delay concepts have been used to describe wave propagation in multimode fibers [9], to optimize light storage in highly scattering environments [10], to shape the flow of light in disordered media [11], and to characterize optical fields passing through complex cavities [12]-[13]. Another notable line of work involves the statistical characterization of chaotic fields in large enclosures and reverberation chambers by exploiting connections between WS time delays and random matrix theory [14]-[17]. This paper outlines a WS time delay theory for electromagnetics. Its contributions are threefold. • First, it reviews WS theory from a systems perspective, using equation (1) to elucidate Q's central role in characterizing group delays in lossless and reciprocal electromagnetic systems. It also describes the so-called WS modes that arise upon diagonalization of Q, which experience well-defined group delays when interacting with a system.
• Second, it introduces closed-form expressions for the entries of the electromagnetic WS time delay matrix for guiding, scattering, and radiating systems. Indeed, Q's defining equation notwithstanding, its computation may proceed without knowledge of ∂S/∂ω. For guiding systems (e.g. closed multiport waveguide networks) excited by Transverse Electromagnetic (TEM) waves, the elements of the WS time delay matrix (1) can be expressed in terms of volume integrals of energy(-like) densities involving the electric and magnetic fields that arise upon excitation of the system's ports. For guiding systems with non-TEM excitations, scattering systems (e.g. perfect electrically conducting surfaces excited by impinging waves), or radiating systems (e.g. antennas and arrays thereof), additional correction terms and renormalization procedures are called for to obtain (1).
• Third, it elucidates some important characteristics of WS modes and demonstrates the potential use of (1) in the broadband characterization of antenna systems. Specifically, it shows that WS modes naturally untangle resonant, corner/edge, and ballistic scattering phenomena, as these are characterized by different dwell times within a system. It also demonstrates that knowledge of Q and S allows for the computation of ∂S/∂ω, which in turn can be used to assess the frequency dependence of impedance matrices of multiport systems. The theory and methods presented in this paper therefore can be viewed as multiport extensions of procedures for characterizing the bandwidth, quality factor, and stored energy of single-port antennas; see [18]-[23].
Throughout this paper, a time dependence e^{jωt} with ω = 2πf is assumed. Additionally, †, T, *, and ′ represent adjoint, transpose, complex conjugate, and angular frequency derivative (d/dω) operations, respectively.

II. WS THEORY: SYSTEMS PERSPECTIVE

A. Group Delay in a One-Port System
Consider the linear, time-invariant, lossless, and reciprocal one-port system shown in Fig. 1, with M = 1. Assume that the line supports the time-harmonic incoming signal

E^i(w, ω) = e^{jβ(ω)w},   (2)

where w is the distance away from the port and β(ω) represents the line's propagation constant. The system generates the outgoing signal

E^o(w, ω) = S(ω) e^{−jβ(ω)w},   (3)

where S(ω) is the scattering coefficient. Because the system is lossless, |S(ω)| = 1, i.e. S(ω) = e^{−jγ(ω)}. γ(ω)'s frequency dependence is key to describing the system's response to transient excitations. Indeed, assume that the line supports the incoming narrowband pulse E^i(w, t) with center frequency ω_o, bandwidth 2Δω, and real envelope A(t) = ∫_{−Δω}^{Δω} A(ω) e^{jωt} dω, given by

E^i(w, t) = ∫_{−Δω}^{Δω} A(ω) e^{j(ω_o+ω)t + jβ(ω_o+ω)w} dω ≈ e^{j(ω_o t + β(ω_o)w)} A(t + β′(ω_o)w),   (4)

where use was made of β(ω_o + ω) ≈ β(ω_o) + ωβ′(ω_o). The system generates the outgoing pulse (5). The outgoing pulse's group velocity and group delay are ∂ω/∂β(ω) and

τ = ∂γ(ω)/∂ω,   (6)

evaluated for ω = ω_o, respectively [24].

B. Group Delay in a Multi-Port System
The above scenario is easily generalized to the linear, time-invariant, lossless, and reciprocal M-port system in Fig. 1, where all lines are assumed identical. Assume that line p supports the time-harmonic incoming signal (7); on line 1 ≤ m ≤ M the system generates the outgoing signal (8). The M × M scattering matrix S is unitary and symmetric, i.e.

S†S = I_M   (9a)
S^T = S,   (9b)

where I_M is the M × M identity matrix. Next, assume that port p is excited by the incoming narrowband pulse (10). Using S_mp(ω) = |S_mp(ω)| e^{−jγ_mp(ω)}, the outgoing pulse on line m is given by (11). Given the narrowband and smooth nature of A(t), the second term in (11) can be neglected. It follows that the outgoing signal's group delay on line m due to an incoming signal on line p is

τ_mp = ∂γ_mp(ω)/∂ω,   (12)

evaluated for ω = ω_o.
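As a concrete illustration of (6) and (12), the group delay of a lossless port can be estimated directly from sampled scattering data. The sketch below is illustrative only: the ideal 15 m delay line standing in for S(ω), and all numerical parameters, are assumptions rather than systems from the paper.

```python
import numpy as np

# Group delay of a lossless one-port with S(w) = exp(-j*gamma(w)), cf. (6):
# tau_g = d(gamma)/dw, estimated from two nearby frequency samples.
c, ell = 3e8, 15.0                        # assumed: ideal 15 m delay line
S = lambda w: np.exp(-1j * w * ell / c)   # hypothetical stand-in for measured data

w0, dw = 2 * np.pi * 2.4e9, 2 * np.pi * 1e4
# arg[S(w0+dw) S*(w0-dw)] = -(gamma(w0+dw) - gamma(w0-dw));
# dw is chosen small enough that no phase wrapping occurs.
tau_g = -np.angle(S(w0 + dw) * np.conj(S(w0 - dw))) / (2 * dw)
print(tau_g, ell / c)                     # both evaluate to 50 ns
```

For a multiport system, the same finite-difference idea applied entrywise to γ_mp(ω) yields the delays in (12).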
To generalize (6) to multiport systems, Smith put forward the time delay matrix defined in (1). To interpret Q, he introduced the weighted average time delay experienced by E^i_p(w, t) as it makes its way from input port p to all output ports:

τ̄_p = Σ_{m=1}^{M} |S_mp(ω)|² ∂γ_mp(ω)/∂ω.   (13)

The weighting coefficient |S_mp(ω)|² is the fraction of the power carried by E^i_p(w, t) transferred to port m. Inserting (12) into (13) yields

τ̄_p = [Q(ω)]_pp,   (14)

where use was made of (9a) and (9b). Equation (14) implies that the diagonal elements of Q are the average time delays experienced by wave packets entering the system. Q's off-diagonal elements have no direct physical interpretation but are important in the transformations described below.

C. Simultaneous Diagonalization of Q and S, and WS Modes
Important insights into the temporal delays imposed by the system can be obtained by diagonalizing Q and S. It immediately follows from (1) that the WS time delay matrix is Hermitian:

Q† = −j(S′)†S = jS†S′ = Q.   (15)

Here, use was made of (S′)†S = −S†S′, which follows from the frequency derivative of (9a). Q therefore can be diagonalized as

Q = W Q̄ W†,   (16)

where W is a unitary matrix whose columns are Q's eigenvectors, henceforth termed "WS modes", and Q̄ is a diagonal matrix holding Q's real eigenvalues. Without loss of generality, it is assumed that all eigenvalues are distinct and ordered Q̄_11 ≤ Q̄_22 ≤ ... ≤ Q̄_MM. It is easily shown that W also factorizes S as

S = W S̄ W^T,   (17)

where S̄ is diagonal. Indeed, substituting (17) and (16) into (1) yields (18). Equation (9b) implies S̄^T = S̄. It therefore follows from (18) that S̄ commutes with Q̄; since S̄ commutes with a diagonal matrix with distinct elements, it must be diagonal [25]. The same result can be obtained using time reversal arguments.

Since Q is Hermitian, its eigenvectors also admit a Rayleigh-quotient characterization: with R(w) = w†Qw / (w†w), the eigenvector w_q extremizes R(w) in the space of vectors w orthogonal to Span(w_1, w_2, ..., w_{q−1}), and Q̄_qq = R(w_q). Alternatively, ∂R(w)/∂w = 0 for each w_q, and R(w)'s critical values are the WS time delays Q̄_qq. The WS modes therefore extremize the amount of time that a pulse dwells in the system. Specifically, the first (last) incoming WS mode represents a combination of excitations that results in an outgoing pulse experiencing the smallest (largest) possible time delay while interacting with the system. Next, let W ≡ W(ω_o) for a fixed frequency ω_o, and define the transformed matrices Q̃(ω) and S̃(ω) as in (20a)-(20b). Note that Q̃(ω) = Q̄(ω) and S̃(ω) = I_M when ω = ω_o. Substituting (20a)-(20b) into (1) yields (21). Exciting the system with the time-harmonic incoming WS mode (22) results in the outgoing WS mode (23). A narrowband outgoing pulse built from WS mode q therefore exhibits group delay Q̄_qq w.r.t. its incoming counterpart, uniformly across all lines. Finally, it is noted that if S(ω_o) and Q(ω_o) are known, (1) and (21) yield the first-order estimate

S(ω_o + δω) ≈ S(ω_o)[I_M − jδω Q(ω_o)]   (24)

and the analogous estimate (25) for S̃(ω_o + δω). These estimates in turn can be used to approximate the system's response at ω_o + δω. For example, consider the incoming signal obtained by evolving the frequency of the E^i_p(w, ω_o) in the WS modes in (23), while keeping their combination constants fixed. The outgoing signal F^o_q(w, m, ω_o + δω) generated in response to this excitation is given by (26), showing that, to first order, WS modes do not couple when changing the frequency; they simply acquire an extra phase delay δω Q̄_qq(ω_o). While the discussion so far assumed all lines were identical, almost all methods and conclusions presented above continue to hold true when this condition is violated. Even when the lines support waves traveling at different speeds, WS modes still describe wave packets that simultaneously exit the system, even though they disperse afterwards (example: non-TEM waveguides).
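The quantities just introduced are straightforward to compute numerically. The sketch below assembles Q from sampled scattering matrices via (1), extracts WS modes and delays via (16), and applies the first-order update (24). The lossless, reciprocal two-port with two distinct internal delays used as test data is a hypothetical example; all names and parameters are assumptions.

```python
import numpy as np

def ws_time_delay_matrix(S, S_prime):
    """WS time delay matrix Q = j S^H S' of a lossless, reciprocal system, cf. (1)."""
    return 1j * S.conj().T @ S_prime

def ws_modes(Q):
    """Diagonalize the Hermitian Q as in (16): real delays (ascending) and a
    unitary W whose columns are the WS modes."""
    return np.linalg.eigh(Q)

# Demo: hypothetical 2-port built from two delays tau mixed by a rotation U;
# S(w) = U diag(exp(-j w tau)) U^T is unitary and symmetric by construction.
tau = np.array([1e-9, 3e-9])
th = np.pi / 6
U = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
S_of = lambda w: U @ np.diag(np.exp(-1j * w * tau)) @ U.T

w0, dw = 2 * np.pi * 1e9, 2 * np.pi * 1e5
S = S_of(w0)
S_prime = (S_of(w0 + dw) - S_of(w0 - dw)) / (2 * dw)   # central difference
Q = ws_time_delay_matrix(S, S_prime)
delays, W = ws_modes(Q)
print(delays)                                   # recovers 1 ns and 3 ns
print(np.round(W.conj().T @ S @ W.conj(), 10))  # diagonal Sbar, cf. (17)
S_next = S @ (np.eye(2) - 1j * dw * Q)          # first-order update (24)
```

Note that the simultaneous diagonalization check relies on the delays being distinct, exactly as assumed below (16).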
III. WS THEORY: ELECTROMAGNETIC PERSPECTIVE
The electromagnetic WS time delay matrix Q(ω) can be evaluated from knowledge of the fields at frequency ω that exist throughout the system for all possible port excitations. This section presents expressions for the entries of the WS time delay matrix Q(ω) for guiding (g), scattering (s), and radiating (r) systems. For guiding systems fed by TEM waveguides, Q(ω)'s entries are expressed as energy-like overlap integrals of the system's electric and/or magnetic fields; correction factors involving the system's scattering matrix and waveguide impedances are used when the system is fed through non-TEM ports. For scattering and radiating systems, a renormalization procedure that extracts the system's far fields is introduced.

A. Guiding Systems
1) Setup: Consider a lossless microwave network with perfect electrically conducting (PEC) walls that is terminated by homogeneous waveguides of uniform cross section. Let Ω and dΩ denote the network's volume and physical port surfaces, respectively. Next, consider the (global) curvilinear coordinate system (u, v, w) shown in Fig. 2. On dΩ, w = 0 and (u, v, w) is locally Cartesian. Let ŵ denote the outward-pointing normal to dΩ. The physical port surfaces are assumed far removed from waveguide discontinuities, so fields there can be expressed in terms of propagating modes. Each physical port supports one or more propagating modes; let M_g denote the total number of propagating modes in all physical ports. Assume that the network is excited by an incoming unit-power field with transverse electric and magnetic components E^i_{p,t}(r, ω) and H^i_{p,t}(r, ω) given by (27a)-(27b). Here r = (u, v, w) and 1 ≤ p ≤ M_g denotes the index of a TE, TM, or TEM propagating mode. The mode's transverse profile X_p(u, v) is supported on dΩ_p ⊂ dΩ. Note that dΩ_p = dΩ_{p′} when modes p and p′ share the same physical port. In (27a)-(27b), Z_p(ω), n_p(ω) = √(Z_p(ω)), and β_p(ω) are the p-th mode's impedance, power normalization factor, and propagation constant. A procedure to obtain X_p(u, v), n_p(ω), Z_p(ω), and β_p(ω) for arbitrarily shaped waveguides is outlined in Appendix A. These modes are automatically orthogonal and normalized to satisfy

∫_{dΩ} X_q(u, v) · X_p(u, v) du dv = δ_qp.   (28)

The X_q(u, v) are purely real, i.e. X*_q(u, v) = X_q(u, v). Next, let E_p(r, ω) and H_p(r, ω) denote the field throughout Ω due to excitation (27a)-(27b), assuming all ports are matched. Near dΩ, these fields' transverse (to ŵ) components can be expressed as in (29a)-(29b), where the outgoing transverse fields are given by (30a)-(30b). The above construct guarantees that S is unitary. From here onwards, the ω dependence of quantities is suppressed.
2) Maxwell's equations: Consider two sets of electromagnetic fields inside Ω, {E_p(r), H_p(r)} and {E_q(r), H_q(r)}, satisfying Maxwell's equations (31a)-(31b), where ε and µ are the frequency-independent permittivity and permeability of the medium inside Ω. Differentiating Maxwell's equations for {E_q(r), H_q(r)} w.r.t. frequency and conjugating the result yields (32a)-(32b). Adding the dot product of (32a) and ½E_p(r) to the dot product of (31a) and ½E*_q(r) yields (33). Similarly, adding the dot product of (32b) and ½H_p(r) to the dot product of (31b) and ½H*_q(r) results in (34). Subtracting (34) from (33) yields (35). Finally, integrating the left- and right-hand sides (LHS and RHS) of (35) over Ω, applying the divergence theorem, and enforcing the boundary conditions on the network's PEC walls yields (36).
3) WS Relationship: The evaluation of the LHS of (36) requires expressions for E′_{p,t}(r), H′_{p,t}(r), E*_{q,t}(r), and H*_{q,t}(r) on dΩ.
Substituting (27a)-(27b) and (30a)-(30b) into (29a)-(29b) and differentiating w.r.t. frequency yields (37a)-(37b). Likewise, substituting (27a)-(27b) and (30a)-(30b) into (29a)-(29b) with p → q, and complex conjugating the result, yields (38a)-(38b). Substituting (38a)-(38b) and (37a)-(37b) into the LHS of (36) and manipulating the resulting equation produces (39). Therefore, the surface integral on the LHS of (36) can be computed given knowledge of the scattering matrix and its frequency derivative. Next, denote the RHS of (36) as Q_qp (40).

[Fig. 3: Scattering system excited through a free-space port defined on a sphere of radius R.]

Substituting (39) and (40) into (36) yields (41). In matrix form, (41) reads (42), where Q^g (43) is the WS time delay matrix for guiding systems, and D^g is a diagonal matrix with (p, p)-th entry given by (44). Equations (42)-(44) show that the entries of the WS time delay matrix for guiding systems are energy-like overlap integrals of the electric and magnetic fields that exist in Ω upon excitation of the system's ports (diagonal elements are energies). Note that if all propagating modes are TEM, then Q^g = Q because Z_p becomes frequency independent. Knowledge of the fields at frequency ω throughout Ω for all possible port excitations therefore allows for the computation of both S and Q, and, via (42), S's frequency derivative.

B. Scattering Systems
1) Setup: Consider a lossless scatterer that resides in free space and is circumscribed by a sphere of radius a centered at the origin (Fig. 3). Let Ω and dΩ denote the volume and surface of a concentric sphere of radius R ≫ a, respectively. On dΩ, fields interacting with the scatterer can be described in terms of M_s = O((ka)²) propagating TEM modes [26], [27]. Assume that the scatterer is excited by an incoming unit-power field with (naturally transverse) electric and magnetic components given by (45a)-(45b). Here r = (r, θ, φ), r̂ is the radial unit vector, and 1 ≤ p ≤ M_s denotes the index of a mode with transverse mode profile X_p(θ, φ). There exist many possible choices for X_p(θ, φ); a specific realization in terms of vector spherical harmonics is outlined in Appendix B. Note that (45a)-(45b) should not be confused with the "incident field" in scattering computations, as the latter carries no net energy across dΩ. Next, let E_p(r) and H_p(r) denote the fields throughout Ω due to excitation (45a)-(45b). Near dΩ, these fields can be expressed as in (47a)-(47b), where the outgoing (automatically transverse) fields near dΩ are given by (48a)-(48b). The above construct guarantees that S is independent of R and that (9a) holds, i.e. that S is unitary.
2) WS Relationship: To derive the WS relationship for scattering systems, once again consider two sets of fields: {E_p(r), H_p(r)} and {E_q(r), H_q(r)}. The above derivation (31a)-(36) for guiding systems continues to hold true with ŵ → r̂. The evaluation of the LHS of (36) requires expressions for E′_{p,t}(r), H′_{p,t}(r), E*_{q,t}(r), and H*_{q,t}(r) on dΩ. Substituting (45a)-(45b) and (48a)-(48b) into (47a)-(47b) and differentiating w.r.t. frequency yields (49a)-(49b). Likewise, substituting (45a)-(45b) and (48a)-(48b) into (47a)-(47b) with p → q, and complex conjugating the result, yields (50a)-(50b). Substituting (49a)-(49b) and (50a)-(50b) into the LHS of (36) with ŵ → r̂, and simplifying the result, yields (51), where Q_qp is still given by (40). To arrive at a WS relationship that is independent of R, consider the quantity Q^s_{qp,∞} obtained by making, in (40), the replacement shown in (52). The quantities in the integrand in (52) are not the transverse-to-r̂ components of {E_p(r), H_p(r)} and {E_q(r), H_q(r)}.
Instead, they are the quantities in (47a)-(47b) evaluated for arbitrary r ∈ Ω. Substituting (45a)-(45b) and (48a)-(48b) into (47a)-(47b), and then evaluating (52), yields (53). Subtracting (53) from both sides of (51) yields (54), where Q^s_qp = Q_qp − Q^s_{qp,∞}. In matrix form, (54) reads (55) with Q^s given by (56), where the domain of integration was changed from Ω to R³ because the bracketed integrands converge rapidly. The above renormalization procedure is different from Smith's [1], who used an averaging scheme to render all integrals convergent. Instead, the procedure resembles that used in [21], [23] for expressing the energy stored in antenna fields. Just like for guiding systems, (55) and (56) show that entries of the WS time delay matrix for scattering systems are energy-like overlap integrals of fields that exist throughout Ω for all possible port excitations. The renormalization procedure in (56) extracts the time delay caused by the scattering process from the total time waves naturally dwell within Ω (which tends to infinity as R → ∞). Evaluation of S and Q at frequency ω once again permits the computation of S's frequency derivative via (55).

C. Radiating Systems
1) Setup: Radiating systems are hybrids of the guiding and scattering systems considered in Secs. III-A and III-B. Consider a radiating system composed of lossless antennas that are fed by PEC waveguides of uniform cross section (Fig. 4). The antennas and their feeds reside in free space and are circumscribed by a sphere of radius a. Let Ω denote the volume of a concentric sphere of radius R ≫ a, and let dΩ denote the union of the concentric sphere's surface and the waveguides' physical ports. On dΩ, fields interacting with the system can be characterized in terms of M_r = M_g + M_s propagating modes. Assume that the system is excited by an incoming unit-power field with transverse electric and magnetic components E^i_{p,t}(r, ω) and H^i_{p,t}(r, ω) near dΩ. If p ≤ M_g, then E^i_{p,t}(r, ω) and H^i_{p,t}(r, ω) are given by (27a)-(27b). Likewise, if p > M_g, then E^i_{p,t}(r, ω) and H^i_{p,t}(r, ω) are given by (45a)-(45b). Let E_p(r, ω) and H_p(r, ω) denote the electric and magnetic fields throughout Ω due to E^i_{p,t}(r, ω) and H^i_{p,t}(r, ω). Near dΩ, the total transverse electric and magnetic fields E_{p,t}(r, ω) and H_{p,t}(r, ω) are given by (29a)-(29b) with the transverse outgoing fields given by (57a)-(57b). All modal quantities in (57a)-(57b) were defined in Sec. III-A and Sec. III-B.
2) The WS Relationship: The WS relationship can be derived by following the same procedure as in Secs. III-A and III-B. First, consider two sets of fields: {E_p(r), H_p(r)} and {E_q(r), H_q(r)}. The derivation in (31a)-(36) continues to hold true with ŵ → r̂ if r is on the spherical surface of radius R. Expressions for E′_{p,t}(r), H′_{p,t}(r), E*_{q,t}(r), and H*_{q,t}(r) on dΩ can be derived using (29a)-(29b), (27a)-(27b), (45a)-(45b), and (57a)-(57b). Substituting these expressions into (36) and simplifying the result yields (58), where Q_qp is still given by (40), and δ_f = 1 if f is true and 0 otherwise. Note that the second and third terms on the RHS of (58) are due to non-TEM waveguide modes and resemble the second and third terms on the LHS of (41). The final term on the RHS of (58) is proportional to R and resembles the second term on the RHS of (51). As in Sec. III-B, a WS relationship that is independent of R is obtained by introducing Q^r_{qp,∞}, which is still given by (52).
Substituting E_{p,t}(r) and H_{p,t}(r) into (52) and evaluating the resulting integral yields (59), which is identical to (53) for the case M_g = 0. Using (59) and (58) results in (60), where Q^r_qp is given by the same expression as Q^s_qp in (56). In matrix form, (60) reads (61)-(62), with the M_g × M_g diagonal matrix D^g still given by (44). In (62), the scattering matrix S was decomposed into four blocks that separate the waveguide and free-space ports. Equations (61) and (62) show that the WS time delay matrix for radiating systems can be computed from knowledge of the fields throughout Ω due to all possible port excitations. Once again, knowledge of S and Q at frequency ω permits the computation of S′, which in turn can be used to compute the frequency derivative of the antenna impedance matrix (see Sec. III-E below).

D. Alternative Expressions for Q
In the previous sections, the entries of the WS time delay matrix for guiding, scattering, and radiating systems were expressed in terms of integrals of both electric and magnetic fields over Ω. By manipulating Maxwell's equations (31a)-(31b) and their frequency derivatives (32a)-(32b), the alternatives (63a)-(63b) to (36) may be derived. These expressions only require integration of electric or magnetic fields. Using (63a) instead of (36) to derive the WS relationship for guiding systems results in (64) and (65). Likewise, using (63b) to derive the WS relationship for guiding systems yields the analogous relations culminating in (68). Note that (41) can be retrieved by adding (65) and (68). Similar WS relations involving Q^e and Q^h can be derived for scattering and radiating systems starting from (65) and (68). Expressions for Q involving only electric or magnetic fields are useful in many computational electromagnetics settings that model only one field type.

E. Impedance Formulation
WS relationship (1) can be used to obtain the frequency derivative of a system's impedance matrix. Recall that the M × M scattering matrix relates the amplitudes of incoming and outgoing waves a and b as

b = Sa.   (70)

The impedance matrix, on the other hand, relates port voltages and currents v and i as

v = Zi,   (71)

with v and i related to a and b through (72a)-(72b). Here, N and Y are diagonal matrices whose (p, p)-th entries are the mode power normalization constants n_p (or n) and admittances Z_p^{−1} (or Z^{−1}), respectively. Defining Z̄ = N^{−1}ZNY, it follows from (70)-(72b) that

Z̄ = (I_M + S)(I_M − S)^{−1}.   (73)

Alternatively, S may be written in terms of Z̄ as

S = (Z̄ − I_M)(Z̄ + I_M)^{−1}.   (74)

Taking the frequency derivative of (74) by applying the chain rule yields (75)-(77). For radiating systems, the frequency derivative of the antenna impedance matrix is easily extracted from (77).
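As a numerical illustration of this impedance route, the sketch below obtains dZ/dω from S and Q for the simplified case of identical ports normalized to a common real reference impedance z0, where N and Y reduce to scalars and (73) becomes the familiar bilinear map. The function name and z0 are assumptions; the paper's (73)-(77), with full per-mode normalizations, follow the same pattern.

```python
import numpy as np

def impedance_and_derivative(S, Q, z0=50.0):
    """Z and dZ/dw from S and the WS matrix Q, assuming identical ports
    with a common real reference impedance z0 (a simplification of (73)-(77))."""
    M = S.shape[0]
    I = np.eye(M)
    S_prime = -1j * S @ Q                 # dS/dw from the WS relationship (1)
    A = np.linalg.inv(I - S)
    Z = z0 * (I + S) @ A                  # bilinear map, cf. (73)
    # d/dw[(I+S)(I-S)^{-1}] = S'(I-S)^{-1} + (I+S)(I-S)^{-1} S'(I-S)^{-1}
    Z_prime = z0 * (S_prime @ A + (I + S) @ A @ S_prime @ A)
    return Z, Z_prime
```

For the radiating systems considered below in Sec. IV-C, the blocks of (61)-(62) supply S′, and the antenna block of the resulting Z′ gives the frequency sensitivity of the port impedances.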
IV. ILLUSTRATIVE EXAMPLES
This section applies WS methods to the characterization of fields with well-defined time delays in guiding and scattering systems. It also demonstrates the use of equations (61) and (77) to compute the frequency derivative of antenna impedance matrices. The examples in this section are merely illustrative in nature. While WS methods have applications in many branches of electromagnetics (see Sec. V), their treatment is beyond the scope of this paper.

A. Guiding Systems
1) A PEC-Terminated Rectangular Waveguide: This first didactic example verifies the WS relationship (42) for an air-filled rectangular waveguide with dimensions a × b and length l that is terminated in a short. Since both S and S′ are diagonal, so is Q^g. Assume the waveguide is excited by a unit-power TM_mn mode. The total electric and magnetic fields inside the waveguide are given by (78a)-(78b), where E_{p,t}(u, v, w) and H_{p,t}(u, v, w) are given by (29a) and (29b) with mode profile (79), wave impedance Z_p = β_p η/k, propagation constant β_p = √(k² − k_c²), cutoff wavenumber k_c = √((mπ/a)² + (nπ/b)²), and a diagonal scattering matrix S with (p, p)-th entry given by (80). In (78a), the longitudinal component of E_p(r) reads as in (81). Substituting (78a)-(78b) into (40) and evaluating the resulting integral yields (82), where cos θ_p = β_p/k. The last two terms on the LHS of (41) are easily shown to equal the negative of the second term on the RHS of (82), yielding (83). Using (83) and the identity (84), which follows from (80), it is easily verified that the WS relationship (42) holds true, i.e. that Q^g_pp = jS*_pp S′_pp. Not surprisingly, the time delay Q^g_pp in (83) equals the time needed for light to travel the length of the zig-zag ray path from the port to the short and back, traditionally associated with the TM_mn mode. The above analysis can be repeated for TE_mn modes with identical results.
2) Notched Waveguide: Consider the waveguide shown in Fig. 5; due to the excitation described below, the structure's height along v is immaterial. The system is excited by v-polarized electric fields with mode profiles given by (85) and frequency f = 30 GHz. Physical ports 1 and 2 support 24 and 28 propagating modes, respectively. In contrast to the previous example, all modes couple; S and Q therefore are dense M_g × M_g matrices with M_g = 52. S and Q are computed using a third-order accurate finite element method, and the WS relationship holds true to 7 digits: ‖Q − jS†S′‖ / ‖Q‖ ≈ 10⁻⁷. Fig. 6 shows the electric field distribution E_{v,1}(u, w) throughout Ω due to excitation of physical port 1 by an incoming field with mode profile (85) with m = 1. Many different scattering phenomena are in play, causing the field to be devoid of obvious structure. Next, the WS time delay matrix is diagonalized (see (16)). Fields associated with select WS modes (port excitations specified by columns of W; see (16)) are shown in Fig. 7. The structured nature of these fields (relative to that in Fig. 6) is immediately apparent. Time delays (Q̄'s diagonal elements) are converted to equivalent spatial shifts measured in centimeters (100 times the eigenvalue multiplied by the free-space speed of light) (Fig. 8). Four different delay/shift regimes are observed (a-d).
a. WS modes 1-14. Fields originating in, and reflecting back to, physical port 1. Consider WS mode #1 shown in Fig. 7a; this mode is characterized by a shift of 30.2 cm, or just over twice the distance from physical port 1 to wall A, representing the shortest possible path a field originating from either aperture can take before exiting the system. Other modes in this category, e.g. WS modes #3 and #5, share similar characteristics but experience slightly larger delays/shifts than WS mode #1 as they travel at a (small) angle w.r.t. the w-axis.
b. WS modes 15-22. Fields originating in physical port 1 and exiting through physical port 2 (and vice versa). Consider WS mode #15 shown in Fig. 7d; this mode is characterized by a shift of 48.0 cm, or 0.5 cm more than the structure's physical length of 47.5 cm. After all modes originating in, and reflecting back to, physical aperture 1 have been exhausted, the shortest remaining dwell time is achieved by traversing the entire cavity (from physical port 1 to 2, or vice versa) while avoiding contact with walls A and B. Other modes in this category, e.g.
WS mode #18, share similar characteristics but experience slightly larger delays than WS mode #15 as they travel at a (small) angle w.r.t. the w-axis.
c. WS modes 23-40. Fields originating in, and reflecting back to, physical port 2. These modes are similar to WS modes 1-14.

B. Scattering Systems
1) PEC Strip: Consider the PEC strip shown in Fig. 9a. The strip is 8 cm wide and centered about the origin (the location of the strip relative to the origin affects the WS time delays, as they are derived from scattering matrices defined on spheres/cylinders centered at the origin). The strip is illuminated by TM_z fields at f = 30 GHz. The strip's dense S and Q matrices are computed using an integral equation code considering M_s = 100 excitations by cylindrical harmonics; because 100 > 2k(width of strip)/2 ≈ 50, these excitations adequately resolve all the system's degrees of freedom. The WS relationship (55) is found to hold true to 5 digits: ‖Q − jS†S′‖ / ‖Q‖ ≈ 10⁻⁵. Very much like the fields due to a waveguide port excitation in Fig. 6, all cylindrical harmonics excite both surface and edge scattering phenomena, causing each of them to experience a different delay (as defined by (12)). Next, the WS time delay matrix is diagonalized (see (16)). Total fields (i.e. sums of incident and scattered fields) and the strip's currents for select WS modes (port excitations specified by columns of W; see (16)) are shown in Figs. 9a-9f. Time delays (Q̄'s diagonal elements) are converted to equivalent spatial shifts just as was done for the notched waveguide in Sec. IV-A2 (Fig. 10).
a. WS edge modes. WS mode #1 is characterized by a (negative!) shift of −8.1 cm, or just over twice the distance from the strip's edge to the origin. The total field and currents are concentrated near the strip's edges; the center of the strip is virtually quiescent (the same to a large extent holds true for the incident and scattered fields (not shown)). The total fields associated with this mode are concentrated along the strip's axis; the two "beams" (coming in predominantly from the ±x-axis) that excite the strip reflect back upon hitting the edge, causing them to travel (roughly) 8 cm less (from the edge to the origin and back) than a field that does not interact with the strip. WS mode #1 is characterized by an even current distribution on the strip. WS mode #2 (not shown) is very similar to mode #1, except that it supports odd currents and fields.
b. WS Geometric Optics (GO)-like modes. The fields and current distributions of WS modes #3 and #16 are shown in Figs. 9b-9c and 9e-9f. These modes are GO-like in nature, and consist of beam-like incident fields that avoid the strip's edges while exciting quasi-periodic currents on the strip, producing beams that specularly reflect away from the strip. These modes are characterized by very small time delays/spatial shifts, as the GO rays' round-trip time is virtually identical to that experienced by a wave that does not interact with the strip. For each WS mode hitting the strip from the top, there is one hitting it from the bottom (not shown; while in principle these modes are degenerate, the symmetry was broken here by positioning the strip a very small distance above the origin). Note that the WS GO fields extremize group time delays (see Sec. II-C), in accordance with the generalized Fermat principle for optics and high-frequency electromagnetic fields [28].
2) PEC Cavity: Consider the square cavity shown in Fig. 11a. The cavity's base is 8.24 cm long and contains a hole. The WS relationship (55) is found to hold true to 4 digits: ‖Q − jS†S′‖ / ‖Q‖ ≈ 10⁻⁴.
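The "holds true to N digits" figures quoted throughout this section are relative residuals of (1). A sketch of how such a check might be scripted is shown below, with S_list standing in for scattering matrices exported at three nearby frequencies by any solver; all names here are assumptions, not the authors' code.

```python
import numpy as np

def ws_residual(S_list, dw, Q):
    """Relative Frobenius-norm residual of the WS relationship Q = j S^H S'.
    S_list = (S at w0-dw, S at w0, S at w0+dw); Q from the overlap integrals."""
    S_m, S_0, S_p = S_list
    S_prime = (S_p - S_m) / (2.0 * dw)        # central difference in frequency
    R = Q - 1j * S_0.conj().T @ S_prime
    return np.linalg.norm(R) / np.linalg.norm(Q)
```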
The WS time delay matrix is diagonalized (see (16)), and total fields and currents for select WS modes are shown in Figs. 11a-11h. Spatial shifts computed from Q's eigenvalues are shown in Fig. 12. Three different delay/shift regimes are observed.
a. WS corner modes. The total field and current of WS mode #1 are shown in Figs. 11a and 11e; just like for the strip, this mode is characterized by a negative spatial shift. The total field and currents are concentrated near the cavity's four corners; the cavity's facets and aperture are virtually quiescent. The cavity supports a total of four corner modes (inset in Fig. 12) with different current and field symmetries across the origin.
b. WS GO-like modes. The fields and current distributions of WS modes #5 and #15 are shown in Figs. 11b-11c and 11f-11g. These modes are GO-like in nature and consist of beam-like incident fields that avoid the cavity's corners and aperture while exciting quasi-periodic currents on, and causing specular reflections from, its facets. These modes spend less time in the system than free-space fields that do not interact with the system, causing them to be characterized by negative time delays/shifts.
c. WS cavity modes. The total field and current of WS mode #120 are shown in Figs. 11d and 11h. This mode is excited by a beam focused on the cavity's aperture, exciting a strong quasi-resonant field in its interior. This field bounces back and forth many times between the cavity's walls, causing it to experience a large positive time delay before escaping. The cavity supports a total of four such modes, each characterized by a different aperture field distribution. All cavity modes are characterized by large currents near the aperture and virtually quiescent total fields on the cavity's exterior walls away from the aperture.
The WS modes in region d are composed of fields that do not interact with the cavity. Indeed, because M_s = 120 > 2k√2 (width of cavity)/2, many cylindrical harmonics do not reach the cavity and their WS combinations do not experience any time delay.

C. Radiating Systems
Consider two strips with length 15 cm and width 0.5 cm that are separated by 15 cm. Both dipoles are center-fed by a voltage source. Matrices Q and S are computed using an integral equation code with M_g = 2 TEM waveguide ports and M_s = 198 scattering ports (many of which are not excited). Figure 13 shows the self and mutual input impedances of the two-element dipole array in the frequency band 0.5-2.5 GHz. The frequency derivative of the antenna impedance matrix is obtained in two ways: (i) using the WS relationship, i.e. by computing Q from the fields in the system for all port excitations using (62), computing S′ using (61), and finally using (77) to compute Z′, which contains a 2 × 2 block with the frequency derivatives of the antenna ports' self and mutual impedances; and (ii) via a finite-difference-in-frequency approximation. Fig. 14 shows the frequency derivative of the self and mutual input impedances. The WS method is observed to accurately predict the frequency derivative of the array's input impedance matrix. Specifically, the WS relationship was found to hold true to 6 digits over the entire frequency range: ‖Q − jS†S′‖ / ‖Q‖ ≈ 10⁻⁶.

V. CONCLUSIONS AND AVENUES FOR FUTURE RESEARCH
This paper presented a WS theory for electromagnetic fields. Following a review of basic WS concepts, closed-form expressions for the entries of the WS time delay matrix involving energy-like overlap integrals of port-excited fields were presented.
Furthermore, the nature of WS modes in guiding and scattering systems was elucidated, and the use of the WS time delay matrix for characterizing the frequency sensitivity of antenna impedance matrices was illustrated. Applications of WS time delay concepts abound in electromagnetics. The authors foresee many more uses of WS methods, including
• the design of multiport and distributed systems that exhibit precise delays, including filter banks, antenna arrays, and (meta)materials;
• the systematic phenomenological classification of fields interacting with guiding, scattering, and radiating systems, e.g. in the identification of scattering centers for radar cross section analysis;
• the broadband characterization of multiport antenna systems;
• the construction of fast frequency-sweep computational methods for characterizing broadband electromagnetic phenomena.
Work on several of the above topics is in progress and will be reported in future papers.

APPENDIX A: WAVEGUIDE MODES
Fields inside waveguides can be expanded in terms of TE, TM, and TEM modes. Consider the Helmholtz equation
New devices in heart failure: an European Heart Rhythm Association report
Developed by the European Heart Rhythm Association; Endorsed by the Heart Failure Association

Several new devices for the treatment of heart failure (HF) patients have been introduced and are increasingly used in clinical practice or are under clinical evaluation in either observational and/or randomized clinical trials. These devices include cardiac contractility modulation, spinal cord stimulation, carotid sinus nerve stimulation, cervical vagal stimulation, intracardiac atrioventricular nodal vagal stimulation, and implantable hemodynamic monitoring devices. This task force believes that an overview on these technologies is important. Special focus is given to patients with HF New York Heart Association Classes III and IV and narrow QRS complex, who represent the largest group in HF compared with patients with wide QRS complex. An overview on potential device options in addition to optimal medical therapy will be helpful for all physicians treating HF patients.

Introduction
The Committee for Scientific Documents of the European Heart Rhythm Association felt that several new devices for the treatment of heart failure (HF) patients have been introduced and are increasingly used in clinical practice or are under clinical evaluation in either observational and/or randomized clinical trials. So far, most of these technologies, if not all, have not been addressed in guidelines or position papers. Recently, several expert consensus documents and guidelines on device therapy in HF have been published,1 followed by the 2012 EHRA/HRS consensus statement and the new 2013 ESC Guidelines on cardiac pacing and cardiac resynchronization therapy.2,3 Therefore, this position paper does not include any recommendation on cardiac resynchronization therapy (CRT) in heart failure. Recommendations should represent evidence-based medicine. However, except for cardiac contractility modulation (CCM), no data based on randomized trials are available, and for none of the new devices are clinical outcome data, in particular mortality data, available. Nevertheless, this task force believes that an overview on these new technologies is important and necessary. Special focus is given to patients with HF New York Heart Association (NYHA) Classes III and IV and narrow QRS complex, who represent the largest group in HF compared with patients with wide QRS complex. Therefore, the task force believes that an overview on potential device options in addition to optimal medical therapy (OMT) would be helpful for all physicians treating HF patients.

Cardiac contractility modulation
Cardiac resynchronization therapy improves symptoms, quality of life (QoL), and exercise tolerance, and reduces hospitalizations in patients with advanced HF and prolonged electrical activation (i.e. increased QRS duration).4,5 The results of a recent study showed that patients with mechanical dyssynchrony detected by tissue Doppler imaging but a normal QRS duration did not benefit from CRT.
6 While the results of outcome trials of CRT in patients with a normal QRS duration are negative, including EchoCRT, a QRS ≥ 120 ms remains one of the criteria for selecting patients for CRT. Patients with a QRS duration >120 ms represented 31% of HF patients in a recent trial.7 The indications for CRT have been summarized in a recent update of the ESC guidelines on devices in heart failure. Although CRT is indicated in patients with a prolonged QRS duration, up to 60% of patients with HF have a normal QRS duration.8 According to European registries published recently, even in the broad-QRS population, only 6% of those cared for in a hospital cardiology setting met the current indications for CRT, and among them only approximately one-third were actually implanted with a CRT device. Therefore, although the last ESC guidelines on HF extended the indications for CRT in chronic heart failure, a substantial proportion of patients remain either not eligible for CRT or simply do not respond.9,10 At a time when pharmacological therapy for HF has made only little advances, it is appropriate to explore whether new device-based therapies have anything else to offer to patients with heart failure.

Background
Cardiac contractility modulation signals are non-excitatory signals which, when applied during the absolute refractory period, enhance the strength of left ventricular (LV) contraction and improve exercise tolerance as well as QoL in patients with heart failure.11,12 As the signals influence cell function without affecting the activation sequence, the effects are independent of QRS duration and should be additive to those of CRT.

Concept of cardiac contractility modulation
Cardiac contractility modulation signals are electrical impulses delivered during the absolute refractory period (Figure 1). Cardiac contractility modulation signals used in clinical practice are delivered 30 ms after detection of the QRS complex onset and consist of two biphasic ±7 V pulses spanning a total duration of 20 ms. These signals do not elicit a new action potential or contraction, as is the case with extra- or post-extrasystolic contractions. Moreover, they do not affect the sequence of electrical or mechanical activation, nor do they recruit additional contractile elements. On this basis, CCM signals are referred to as 'non-excitatory'.

Implantation technique
Cardiac contractility modulation signals are provided by a pacemaker-like impulse generator (OPTIMIZER III, IMPULSE Dynamics) that connects to the heart via two standard active-fixation leads placed endocardially on the right ventricular (RV) septum (Figure 2). An additional right atrial lead is used to detect the timing of atrial activation which, upon detection by an algorithm, ensures appropriate timing of CCM signal delivery without the risk of inducing ventricular arrhythmias. Cardiac contractility modulation pulses contain 50-100 times the amount of energy delivered in a standard pacemaker impulse and are, therefore, readily identified by body surface electrocardiography (Figure 1). Aside from the superimposed electrical artefact, there is generally no significant effect on the underlying ECG signal. Within several minutes of acute CCM signal application, a mild increase in ventricular contractile strength can be detected, as indexed by increases in LV pressure (LVP) and the rate of rise of LVP (LV dP/dt_max).
12-16 Haemodynamic responses to acute CCM signal application are measured with a micromanometer catheter (Millar Instruments) placed in the LV. An increase in the maximum rate of LVP rise (dP/dt_max, an index of systolic function) of at least 5% is defined as a significant response to CCM. If such changes cannot be achieved, even after repositioning of the electrodes, the device is not implanted. The acute change in dP/dt_max is independent of QRS duration.17,18 Moreover, in patients with prolonged QRS duration, the acute contractile effects of CCM are additional to those of CRT, as expected from a therapy that exerts its effects on substrates other than dyssynchrony.17 The impact of an HF therapy on myocardial energetics is an important factor to consider in long-term safety and efficacy. Therefore, the acute effects of CCM on myocardial energetics were investigated in a clinical study in which myocardial oxygen uptake (MVO2) and LV dP/dt_max were measured in nine patients exposed to acute CCM signals.19 In this study, acute CCM was associated with an increase in dP/dt_max from 630 to 800 mmHg/s (an approximately 20% increase). Despite an acute increase in contractility, there was no detectable increase in myocardial oxygen consumption. These data were compared with those of Nelson et al.,20 who had also measured MVO2 and dP/dt_max during temporary RV and biventricular (BIV) pacing. In this study, contractility was increased first by applying CRT and then dobutamine to achieve a comparable increase in dP/dt_max. As expected from its effects on calcium cycling, dobutamine increased both dP/dt_max and MVO2. In contrast, CCM increased dP/dt_max but not MVO2. Thus, like CRT, the acute, modest increase in contractility achieved by CCM does not increase energy demands. In addition to the acute haemodynamic effects noted above, improvement in global ventricular function has also been reported during chronic CCM signal application.21,22 In one study, 30 patients with ejection fraction (EF) < 35% and NYHA III symptoms despite OMT underwent three-dimensional echocardiography at baseline and 3 months after CCM treatment.22 Left ventricular ejection fraction (LVEF) increased by 4.8 ± 3.6% and LV end-systolic volumes decreased by 11.5 ± 10.5%, respectively. These findings indicate that LV reverse remodelling can be achieved by CCM on the background of optimal medical therapy.

Mechanisms of action
Several recent studies have sought to define the mechanisms by which CCM signals impact regional and global myocardial function. Early studies had shown that electromagnetic fields can impact protein-protein interactions and gene expression.23 Since the CCM-induced increases in contractility are not associated with an increase in MVO2, CCM signals may have a direct impact on cellular physiology beyond the typical acute effects on calcium handling that underlie pharmacological inotropic effects. To explore this hypothesis, myocardial samples were initially obtained for molecular (northern blot) and biochemical (western blot) analyses from an animal model of HF in both acute and chronic studies.15,16 Samples were taken from the interventricular septum (near the site of CCM signal delivery) and from a remote area on the LV free wall. Emphasis was placed on genes and proteins of high abundance whose tissue content was known to be significantly altered in heart failure. It was demonstrated that one of the most rapid effects of CCM occurs near the site of signal delivery.
Within minutes, there are increases in the phosphorylation of phospholamban, a key protein that modulates the activity of sarco-endoplasmic reticulum calcium ATPase type 2a (SERCA2a), which in turn modulates calcium handling by the sarcoplasmic reticulum.16 Shortly thereafter, changes in gene expression can be demonstrated. For example, the expression of SERCA2a was decreased in untreated HF animals both in the interventricular septum ('near') and remote from it on the LV free wall ('remote'). For tissue obtained from animals with acute (4 h) CCM treatment, SERCA2a expression increased in the region near the site of CCM signal administration, but not in the remote region. After 3 months, however, SERCA2a expression was improved in both near and remote regions. These findings were reproduced for other genes whose expression is decreased in chronic heart failure. Brain natriuretic peptide (BNP) expression was also examined; in this case, BNP was overexpressed in untreated heart failure, decreased acutely only in the region near the CCM pacing site, and decreased in both the near and remote sites with chronic CCM treatment. The fact that gene expression is improved in the short term only near the area of treatment implies that the effects of CCM treatment are local and direct. However, in the long run, where expression is improved in both near and remote sites, two possible factors may contribute. First, changes in gene expression in remote areas may be secondary to the global haemodynamic benefits provided by chronic regional CCM treatment. Alternatively, there may be some direct effect that is transmitted to remote sites via gap junctions. These findings concerning the molecular effects of CCM treatment in an animal model of HF were confirmed in a study with right endomyocardial biopsies performed in patients with heart failure. Most interestingly, improvements in gene expression correlated with improvements in peak VO2 and the Minnesota Living with Heart Failure Questionnaire (MLWHFQ).24 These correlations were present both in patients with coronary artery disease (CAD) and reduced LV function and in patients with idiopathic cardiomyopathy. These findings indicate that CCM treatment reversed the maladaptive cardiac foetal gene programme and normalized expression of key sarcoplasmic reticulum (SR) Ca2+ cycling and stretch-response genes.

Chronic signal application in heart failure patients
Following three small pilot studies of chronic CCM signal application,21,25,26 a multicentre randomized, double-blind, double-crossover study was performed in HF patients with EF ≤ 35% and NYHA Class II or III symptoms despite OMT (the FIX-HF-4 study).27 One hundred and sixty-four subjects with EF < 35% and NYHA Class II (24%) or III (76%) symptoms received a CCM pulse generator. [Figure 2 caption: CCM system (right side) and a one-chamber ICD device (left side); the CCM system consists of a can and three pace-sense electrodes, two fixed to the mid-septum and the third to the right atrium.] Patients were randomly assigned to Group 1 (n = 80, CCM treatment for the first 3 months, sham treatment for the second 3 months) or Group 2 (n = 84, sham treatment for the first 3 months, CCM treatment for the second 3 months). The co-primary endpoints were changes in peak oxygen consumption (VO2,peak) and MLWHFQ. Baseline EF (29.3 ± 6.69% vs. 29.8 ± 7.8%), VO2,peak (14.1 ± 3.0 vs. 13.6 ± 2.7 mL/kg/min), and MLWHFQ (38.9 ± 27.4 vs. 36.5 ± 27.1) were similar between groups.
VO2,peak increased similarly in both groups during the first 3 months (0.40 ± 3.0 vs. 0.37 ± 3.3 mL/kg/min), suggesting a placebo effect. During the next 3 months, however, VO2,peak decreased in the group switched to sham (−0.86 ± 3.06 mL/kg/min) and increased in patients switched to active treatment (0.16 ± 2.50 mL/kg/min). At the end of the second phase of the study, the difference in peak VO2 between groups was approximately 1 mL/kg/min. After performing statistical testing for carryover and period effects, data from both study phases could be formally combined to arrive at a net mean (±SD) treatment effect on VO2,peak of 0.52 ± 1.39 mL O2/kg/min (P = 0.032). Quality of life, assessed using the MLWHFQ, behaved similarly, trending only slightly better with treatment (−12.06 ± 15.33 vs. −9.70 ± 16.71) during the first 3 months, again consistent with a large placebo effect. During the second 3 months, MLWHFQ increased in the group switched to sham (+4.70 ± 16.57) and decreased further in patients switched to active treatment (−0.70 ± 15.13) (a reduction in scores denoting an improvement in QoL). As was the case for VO2,peak, formal statistical testing of data from a cross-over study design confirmed that these differences represent a statistically significant positive treatment effect (P = 0.030). Serious cardiovascular adverse events (AEs) were tracked carefully in both groups. The study met its primary endpoint using the formal analysis of a cross-over study design; both exercise tolerance and QoL improved significantly during the ON periods compared with the OFF periods. The most frequently reported AEs were episodes of decompensated heart failure, atrial fibrillation (AF), bleeding at the OPTIMIZER system implant site, and pneumonia. Importantly, there were no significant differences between ON and OFF phases in the number or types of AEs. The second randomized study of CCM was a multicentre study involving 428 patients recruited from 50 sites in the US (FIX-HF-5 study).28 Patients were characterized by NYHA Class III (89%) or IV (11%), QRS duration averaging 101 ms, and EF averaging 25%. Patients were required to be receiving stable optimized medical therapy, defined as a beta-blocker, angiotensin-converting enzyme inhibitor or angiotensin receptor blocker, and a diuretic for at least 3 months (unless intolerant); the daily dose of each medication could not vary by more than a 50% reduction or 100% increase over the prior 3 months. Patients were randomized (1:1 and stratified for ischaemic or non-ischaemic underlying aetiology) to OMT plus CCM (n = 215) or OMT alone (n = 213). The primary safety endpoint was a test of non-inferiority between groups at 12 months for the composite of all-cause mortality and all-cause hospitalizations (12.5% allowable delta). Efficacy was assessed by changes in exercise tolerance and QoL at 6 months compared with baseline. Exercise tolerance was indexed by the ventilatory anaerobic threshold (VAT, which was the declared primary endpoint) and by peak VO2 (pVO2). Quality of life was assessed by the MLWHFQ. Furthermore, the primary analysis of these endpoints was a 'responders analysis',29 which was a between-group comparison of the percentage of patients whose VAT increased by ≥20%. The study groups (OMT vs. CCM) were comparable for age (58 ± 13 vs. 59 ± 12 years), congestive heart failure (CHF) aetiology (67 vs. 65% ischaemic), QRS duration (101 ± 0.5 vs. 101 ± 0.6 ms), EF (26 ± 7 vs. 26 ± 7%), VAT (11.0 ± 2.2 vs.
11.0 ± 2.2 mL/kg/min), pVO2 (14.7 ± 2.9 vs. 14.8 ± 3.2 mL/kg/min), and other important baseline characteristics. The safety endpoint of the study was met; by the end of the 1-year follow-up, 52% of patients in the treatment group and 48% of patients in the control group met the study-specified safety endpoint, which was non-inferior by both a Blackwelder's test of non-inferiority and a log-rank test comparing Kaplan-Meier survival curves. Regarding efficacy, pVO2 was 0.7 mL/kg/min greater (P = 0.024) and MLWHFQ was 9.7 points better (P < 0.0001) in the treatment group than in the control group. However, the primary endpoint of this study was not reached. There was no difference in VAT between the groups (neither based on a responders analysis nor on a comparison of mean changes between the groups). The study protocol indicated that efficacy effects would be explored in specific patient subsets.30 This analysis showed that particularly large effects on both VAT and pVO2 were observed in patients with a baseline EF ≥ 25% and NYHA Class III symptoms. In this subgroup (which consisted of 97 OMT and 109 CCM group patients, nearly half of the entire study cohort), VAT was 0.64 mL/kg/min greater (P = 0.03), pVO2 was 1.31 mL/kg/min greater (P = 0.001), and MLWHFQ was 10.8 points better (P = 0.003) in the treatment group than in the control group. Regarding the results of the 'responders analysis', 5.8% of OMT and 20.5% of CCM patients exhibited a ≥20% increase in VAT at 6 months, a difference of 14.7% (P = 0.007). Furthermore, in this subgroup, the effects on exercise tolerance and QoL were sustained through the entire 12-month follow-up period of the study. This finding has interesting implications related to the purported mechanisms of action. Cardiac contractility modulation signals are applied to one region of the heart and are believed to have direct and relatively rapid local effects. It is hypothesized that secondary remote effects are achieved, over time, when the local effect is large enough to have a sufficient effect on global function. Heart size increases as EF decreases; it can therefore be speculated that the larger the heart, the less the impact on global function and the less effective the therapy. To further test this hypothesis, an additional subgroup analysis was performed. There were 38 patients in the FIX-HF-5 study with an EF ≥ 35%. These patients were admitted to the study because the EF determined at the investigative site was <35%; however, all analyses were based on the core lab EF assessment. Eighteen of the patients were in the treatment group and 20 patients were in the control group. In this subgroup, efficacy parameters were even more greatly improved by CCM: peak VO2 was 2.96 mL/kg/min greater (P = 0.03), VAT was 0.57 mL/kg/min greater (P = ns), and the MLWHFQ score was 18 points better (P = 0.06) in the CCM group compared with the OMT group. Although not all of these differences were statistically significant in view of the small sample size, the trends suggest greater effects than in the larger subgroup of patients with EF ≥ 25% (Tables 1 and 2). Long-term results from an observational study in 54 patients with advanced HF have shown no adverse effect of CCM on long-term survival.31

Combining cardiac contractility modulation with cardiac resynchronization therapy
As discussed above, CCM signals applied in the acute setting to HF patients simultaneously receiving CRT provide additive effects on LV contractility as indexed by dP/dt_max.
17 In view of the fact that symptoms persist in more than 30% of patients with prolonged QRS duration receiving CRT, it can be postulated that the addition of CCM treatment may provide an option for these patients. The initial experience with combining CCM and CRT in non-responders has been published.32 It was demonstrated that the implantation procedure is technically feasible, that the OPTIMIZER and CRT defibrillator (CRT-D) devices can coexist without interference, and that acute haemodynamic and clinical improvements can be observed. These preliminary results, however, have to be confirmed in a prospective study that is under way.

Clinical perspectives
Two large-scale studies have validated the safety and suggested the effectiveness of CCM therapy. Results of these trials show that CCM improves exercise tolerance as indexed by peak VO2. Other indices of exercise tolerance (e.g. 6 min hall walk) and QoL (NYHA class and MLWHFQ) have also been shown to improve. Data from both clinical trials suggest that mortality rates are unaffected by CCM therapy. However, the aim of these studies was to demonstrate an absence of an increase in mortality as a safety endpoint, and they were not powered to show a mortality benefit. The level of recommendation would be substantially higher if a gain in mortality could be demonstrated. In this respect, a specific subgroup of HF patients (EF ≥ 25%) is more likely to respond favourably. Moreover, more research is probably required to determine the optimal pacing configuration (single or biphasic stimuli, optimal delay from the pacing spike, optimal duration of each phase, and optimal amplitude of the signal), the optimal daily duration of application, the optimal localization of the pacing sites, and the optimal number of pacing sites to gain the maximal benefit from this therapy. The potential interest of CCM in early stages of HF has not been investigated.

Required technical improvements
Some technical limitations should be solved in the future: (i) with the latest version of the device, CCM cannot be delivered in patients with AF or frequent ectopy, as it is designed to inhibit CCM delivery on arrhythmias and relies on detection of a P-wave. A future device is supposed to incorporate an algorithm that does not rely on P-wave detection and therefore could be used in patients with AF. (ii) The development of a device combining CCM with implantable cardioverter defibrillator (ICD) functions would be desirable in this population of HF patients. Future studies will be required to define whether CCM is additive to CRT pacemakers or CRT-D in patients with wide QRS. This would be facilitated by the development of a single device that incorporates pacing, antitachycardia therapies, and CCM. (iii) A simple peri-implantation method to guide lead positioning, beyond the invasive LV dP/dt_max measurement, would be desirable.

Neuromodulation
Heart failure is associated with significant perturbations of the autonomic balance, with predominant sympathetic activation over the parasympathetic system.33-35 Several indirect markers of cardiac vagal tone, such as heart rate variability, heart rate turbulence, and baroreflex sensitivity, are suppressed in CHF, which in turn is predictive of a worse outcome of CHF.36,37 Likewise, an increase of cardiac vagal tone by pharmacological β-receptor blockade has substantially improved the functional status and survival of HF patients.
Cardiac vagal tone is controlled by pre-ganglionic parasympathetic cardiac neurons residing mainly in the nucleus ambiguus and the dorsal motor nucleus of the brainstem. Of note, parasympathetic cardiac efferent neurons in the nucleus ambiguus are intrinsically electrically silent,39 thus needing presynaptic input to generate and modulate parasympathetic efferent activity and tone. Such input is likely provided by sensory afferent fibres from arterial baroreceptors or respiratory sensory neurons,40 but spinal cord afferent fibres may also connect to the brainstem and modulate cardiac vagal tone (Figure 3).41 Thus, attempts to therapeutically increase the cardiac parasympathetic tone by electrical neural stimulation may operate at any part of this integrative circuit. This has led to four neurostimulation approaches, two of which focus on reflex activation of the parasympathetic tone and sympathetic inhibition via afferent fibre stimulation [spinal cord stimulation (SCS) and carotid sinus nerve stimulation], while two concentrate on efferent parasympathetic stimulation (cervical vagal and intracardiac vagal stimulation).

Spinal cord stimulation
Background
Preliminary work suggests some benefits from neuromodulation with SCS in HF patients.42 Spinal cord stimulation has been used for over 40 years in the management of chronic intractable pain.43 According to the American Association of Neurological Surgeons (aans.org), as many as 50 000 neurostimulators are implanted worldwide every year. Among multiple indications, the benefits have been confirmed in patients with refractory angina44 associated with end-stage CAD45-48 and in the absence of CAD (syndrome X).49,50 Positive effects have also been demonstrated in the peripheral vascular beds: in patients with severe pain due to distal atherosclerosis,51,52 in Raynaud's disease, and at the level of the cerebral vasculature.53-55

[Figure 3. Parasympathetic and sympathetic innervations of the heart. Functional anatomy: efferent pre-ganglionic vagal (parasympathetic) fibres (green) course from the brainstem towards the heart and connect with post-ganglionic cells in circumscript epicardial ganglionic plexus. Sympathetic pre-ganglionic fibres (orange) switch to post-ganglionic fibres inside the stellate ganglion. Major afferent inputs to the central autonomic centres are the carotid sinus nerves (brown) and spinal cord afferents (black). (Please also see text for a detailed description of function.)]

Spinal cord stimulation compared positively with surgical56 and laser endo-myocardial revascularization48,57 in patients with severe refractory angina associated with severe CAD, without any increase in adverse ischaemic events. This suggests that the improvement in symptoms and in the clinical condition of these patients was mediated only partially through the 'gate-control' mechanisms described by Melzack and Wall in 1965.58,59 The mechanisms involved remain incompletely understood and appear to be far more complex than mere suppression of the nociceptive influx associated with myocardial ischaemia:59,60 SCS seems to affect the balance between oxygen demand and supply. Since there is little evidence that SCS improves coronary blood flow in ischaemic patients,61-63 other factors acting on the balance between oxygen requirements and supply must play an important role.
Spinal cord stimulation applied at the low cervical (C7-C8) and/or high thoracic (T1-T6) level exerts its effects through reflex modulation of the vagal and sympathetic systems (Figure 4).64 Overall, the cumulative effect on the sympathetic and parasympathetic systems seems to be a re-equilibration of the balance in favour of the latter.65-72 Other positive effects via cytokines and the NO/NOS system73-75 have been identified, and there may even be some direct protective mechanisms on the myocardium during ischaemia.64,72,76 Despite all that is known from the abundant data accumulated over the last 30 years in ischaemic patients (and animals), very little is known about the effects of SCS on the systolic function of the LV. In 1993, Kujacic et al. found by echocardiography that the LVEF decreased less with adenosine infusion under SCS compared with the control situation in 15 patients with severe multivessel CAD. Interestingly, the baseline LVEF was also slightly higher at rest under SCS (48%) than without SCS (44%) (P value not provided).77 In 2001, Gersbach et al.78 showed improved cardiac output, reduced peripheral resistance, and increased cardiac work efficiency, thereby decreasing myocardial oxygen demand. More recently, Issa et al. found in a canine model of HF that SCS reduced the risk of ischaemic ventricular arrhythmias. Interestingly, they also found, in accordance with the previous reference, that SCS reduced the sinus rate and the systolic blood pressure (consistent with anti-sympathetic effects).79 In 2009, the same group confirmed the long-term (5-10 weeks) beneficial effects of SCS in the same animal model. Compared with medical treatment or with controls, SCS resulted in a significant improvement of LVEF, clinical parameters (blood pressure and body weight), and serum levels of BNP and norepinephrine, and in fewer episodes of non-sustained VT detected by the implanted ICD.80 Liu et al. used nine adult pigs in which HF was induced by myocardial infarction and 4 weeks of rapid pacing. The animals were studied 24 h after rapid pacing was turned off. Haemodynamic and echocardiographic data were collected at baseline (HF) and after two sets of 15 min of SCS separated by 30 min of recovery. They found that SCS, in this acute model, significantly improved LVEF and maximum positive dP/dt while decreasing myocardial oxygen consumption. By echocardiography, the benefits in terms of LV function were present both globally and regionally, i.e. associated with better intra-ventricular synchrony assessed by speckle tracking.81 Finally, to our knowledge, there is only one recent report in patients: in 2009, at the HF Society meeting, Jesus et al.42 presented their initial experience with SCS in four patients with advanced HF; all patients were reported to have improved clinically, and three increased the distance they covered during the 6 min walk test.

Technology and implant technique
The North American Neuromodulation Society (NANS) has established guidelines concerning the training requirements for implantation and follow-up of SCS devices.82 The technique is well described.83-86 The procedure is performed under sedation with the patient lying prone on an X-ray compatible table allowing anteroposterior and lateral views. The peridural space is accessed at the L1-L2 level with a needle. The lead(s) is (are) advanced under fluoroscopic guidance up to the desired level in a posterior and median/para-median position close to the dorsal horn fibres (Figure 4).
The impulse generator is usually implanted subcutaneously in the low back or high buttock, with tunnelling of the leads down to that region. Complications are observed in up to 38% of patients.51,83,87,88 Longer-term lead migration or breakage, or other problems with the impulse generator, can occur in 20-30% of patients, with a need for surgical correction in most cases. There are case reports of patients who were implanted with pacemakers89-92 or ICDs93-95 and received a spinal stimulation system. Although electromagnetic interference is possible, SCS is safe in these patients provided that the sensing of the cardiac device is programmed in the bipolar mode, that the sensitivity is set at 0.3 mV, and that the SCS is implanted contra-laterally and at a sufficient distance.96 Verification of the cardiac device is nevertheless advised to minimize the risks. Interestingly, in their experiments with dogs, Lopshire et al.80 implanted 60 animals with both systems (SCS and ICD) and no interference was noted. Two companies produce the two types of devices, and neither cites SCS as contra-indicated in the presence of an implanted cardiac device. Of note, an SCS device could nevertheless be damaged by ICD discharges, probably due to the associated strong muscular contraction.95

Clinical perspectives
Spinal cord stimulation, via complex mechanisms, appears promising in pre-clinical experiments to improve the systolic function of the LV and to decrease the ventricular arrhythmias associated with this condition. According to ClinicalTrials.gov, there are at present two ongoing clinical studies directly addressing the role (and possible benefits) of SCS in severely afflicted HF patients. SCS-HEART is a non-randomized feasibility study that aims to determine the safety of SCS in 20 patients with an LVEF between 20 and 35%, in NYHA functional class III, who already have an ICD implanted. Results should be available in the second half of 2014. On the other hand, DEFEAT-HF (sponsored by Medtronic) is a single-blind randomized study involving similar patients that will compare SCS 'ON' vs. 'OFF' in 250 recipients. Results are also expected at the end of 2014. Until these studies are completed, it remains impossible to recommend this therapy in HF patients outside of the currently recognized indications (Table 3).

Carotid sinus nerve stimulation
Background
Baroreceptors are embedded in the wall of arterial vessels and are preferentially found in the aortic arch at the origins of the brachiocephalic or left subclavian artery, in the brachiocephalic artery at its bifurcation into the right subclavian and right common carotid artery, in the carotid sinuses, along both common carotid arteries, and in both common carotid arteries at the origin of the superior thyroid artery.97 They respond to changes in arterial pressure and mediate their signals via afferent rapidly conducting (2.5-60 m/s) myelinated A fibres and slowly conducting (<2.5 m/s) non-myelinated C fibres to the brainstem,98 where they connect to efferent vagal neurons. A-type receptors respond to normal arterial pressures and regularly discharge at high frequency (>100 Hz) synchronously with the pulsatile pressure wave.99 By contrast, C-type baroreceptors respond to higher threshold values of the mean arterial pressure with irregular firing at a lower frequency of 20-30 Hz.99
Increases of arterial blood pressure elicit a reflex activation of efferent vagal fibres, resulting in a decrease of the sinus heart rate.99 Baroreflex activation is considered a powerful contributor to the baseline parasympathetic tone.100 In human HF, this baroreflex is profoundly suppressed101-104 and worsens with deterioration of CHF.105 Thus, electrical stimulation of the afferent carotid sinus nerves, which connect the baroreceptors to the brainstem, may evolve as a tool for increasing cardiac vagal tone during HF.

Mechanisms of action
Electrical stimulation of the baroreceptor fibres (carotid sinus nerve fibres) elicits a graded decrease of heart rate and arterial pressure106 via parasympathetic efferent activation and sympathetic withdrawal, and has recently been introduced clinically for the treatment of resistant arterial hypertension.107-110 The potential of electrical carotid sinus nerve stimulation to shift the autonomic balance towards a higher parasympathetic tone has also been investigated in HF models. In fact, chronic low-intensity carotid sinus stimulation, which lowers blood pressure by only 10-15 mmHg and does not critically decrease heart rate, was able to improve survival in dogs with pacing-induced HF.111 In addition, LV systolic and diastolic function improved due to reverse LV geometric and interstitial remodelling.112 In parallel, the downregulation of β1-receptor density in CHF was almost normalized to control levels and serum catecholamine levels declined, which may be taken as evidence for a direct or reflex decrease of the systemic sympathetic tone in these models.112

Technology and implant technique
The first systems consisted of an implantable battery-powered pulse generator that was placed subcutaneously in the pectoral region, as well as two leads with finger-shaped bipolar electrodes that were surgically implanted circumferentially around each carotid sinus. The electrode wires (leads) were tunnelled subcutaneously to the impulse generator near the collarbone. The impulse generator delivered, depending on the system programming, chronic activation energy with individually adjustable frequency, amplitude, and pulse width to the left and right carotid sinus. The device delivered rectangular pulses (3, 5, or 7 V while keeping the current constant). The size of the finger electrodes and the need to wrap them around the vessel increased the risk of injury to the surrounding nerves. The next generation of BAT, the Neo system, was thus developed. Its implantable pulse generator (Figure 5) provides extended battery longevity (1 year for the Rheos system vs. 3 years for the Neo system) and a smaller pulse generator size. Instead of two leads, the Neo system consists of only one lead, which requires less dissection of the carotid artery for implantation. The iridium oxide-coated unipolar electrode is flat and disk-shaped with a 6 mm diameter (providing stimulation currents of 2, 5, or 8 mA while keeping the stimulation voltage constant). The implant procedure can be done under local anaesthesia with sedation.

Table 3. Major findings reported with SCS
Spinal cord stimulation has been shown to have a positive impact on cardiac ischaemia due to CAD:
1. The mechanisms involved go beyond suppression of the nociceptive stimuli
2. SCS has limited impact on coronary blood flow; therefore, the mechanisms must lie elsewhere
3. SCS has a profound positive impact on the cardiac sympathetic/parasympathetic balance
4. SCS also positively affects the NO/NOS and cytokine systems at the myocardial level
Spinal cord stimulation has shown some effects on LV function and ventricular arrhythmias:
1. One preliminary report on four HF patients who improved clinically with SCS
2. SCS was associated with a smaller deterioration of LVEF due to adenosine administration in patients with multivessel CAD
3. SCS was associated with better invasive haemodynamic findings in patients with normal LV function
4. SCS was associated with a reduced risk of ventricular arrhythmias and with LV function improvement in animal models of HF

Clinical data
The Rheos system by CVRx was the first baroreceptor stimulation system implanted in hypertensive patients. The Rheos system has so far been evaluated in three multicentre clinical studies in patients with resistant hypertension: DEBuT-HT/DEBuT-HET in Europe110 and the Rheos Feasibility Trial and Rheos Pivotal Trial,107 which mostly included patients in the USA. The Rheos Pivotal Trial is by now in long-term follow-up.111 The randomized part of the trial, with a 12-month blinded follow-up, was published in 2011:107 265 patients with resistant hypertension received the device and were randomized to either immediate BAT or delayed BAT following the 6-month visit. Three of the five pre-specified co-primary endpoints were met: long-term sustained response to baroreceptor activation therapy, with 88% responders after 12 months of follow-up; incidence of short-term AEs (6-month follow-up); and long-term AEs (12-month follow-up). The study did not meet the endpoints for acute responders (54 vs. 46% responders after 6-month follow-up) and procedural safety (event-free rate 74.8%). A majority of these events were related to the carotid sinus lead placement and involved transient or permanent nerve injury that occurred during the implant. The majority (76%) of procedure-related AEs resolved completely. To date, no randomized clinical data exist evaluating the effectiveness of BAT in patients with systolic HF. In addition to experimental data,111-113 several sub-studies and single-centre data have indicated beneficial physiological effects of BAT beyond mere blood pressure reduction.114-116

Clinical perspectives
The CVRx Neo system received the CE mark in the fourth quarter of 2011. With the Neo system, a verification study for the treatment of patients with resistant hypertension was conducted in Europe and Canada: 30 patients completed a 6-month follow-up and showed an average systolic blood pressure reduction of 26 mmHg. In the USA, the HOPE4HF study117 is planned, with the primary endpoint assessing the effects of BAT on HF hospitalizations in patients with HF and preserved EF. In patients with systolic HF, the randomized XR-1 Heart Failure Study has started patient recruitment. One hundred and forty subjects are planned to be randomized 1:1 to either device therapy or medical therapy alone at up to 30 sites in Europe and Canada. Main inclusion criteria are LVEF ≤ 35%, NYHA Class III HF, optimal stable HF therapy for at least 4 weeks, and patient age between 18 and 80 years. The primary endpoint is to determine whether BAT with the Neo system produces a change in LVEF from screening through 6 months for subjects treated with BAT relative to standard of care.
Secondary endpoints are 6-month changes in the following measures: 6 min hall walk test; NYHA classification; quality of life using the Minnesota Living With Heart Failure Questionnaire; NT-proBNP; creatinine levels; central pressure and haemodynamic parameters; electrocardiographic parameters and indices of rhythm status derived from 24 h Holter recordings; and additional echocardiographic parameters. The primary safety objective is to describe safety by estimating the rate of all system- and procedure-related complications. The primary exclusion criteria are significant stenosis or previous surgery of the carotid arteries, severe chronic obstructive lung disease, terminal renal failure, and acute cardiac decompensation. Interestingly, patients with the combination of a narrow QRS complex and permanent AF can be included in this study, a patient group that cannot be considered for CRT or CCM therapy.

[Figure 5. (A) The device is implanted via the right side and is connected to a patch electrode fixed on the right-sided carotid sinus. (B) An example of a carotid sinus nerve stimulator; the device carries one electrode connected to the patch electrode. (C) A patch electrode, which is fixed to the carotid sinus nerve.]

Cervical vagal nerve stimulation
Background
The efferent cardiac parasympathetic signalling chain comprises pre-ganglionic parasympathetic neurons originating in the brainstem, which course inside the vagal nerve and connect via nicotinic acetylcholine receptors to post-ganglionic neurons aggregated in circumscript cardiac ganglionic plexus inside the heart. Post-ganglionic fibres then innervate the cardiac target cells via muscarinic acetylcholine receptors. During CHF, the density of cardiac muscarinic receptors is increased,118 most probably due to an adaptive upregulation secondary to decreased efferent vagal input. However, the post-ganglionic vagal nerve transmission seems to be intact in HF: selective electrical stimulation of post-ganglionic vagal nerve fibres to the sinus node leads to a larger decrease of the sinus rate in HF dogs than in controls, which would be in line with an increased number of muscarinic receptors in HF.119 By contrast, electrical stimulation of presynaptic cervical vagal fibres led to a smaller decrease of the sinus rate in CHF animals as compared with control animals.119 Thus, pre- to post-ganglionic parasympathetic efferent neurotransmission via nicotinic acetylcholine receptors seems to be impaired during CHF.119 Importantly, these nicotinic receptors are agonist dependent, and chronic exposure to a nicotinic agonist during HF has been shown to re-establish efferent parasympathetic neural control of the sinus node.120 These experiments form a pathophysiological rationale for applying electrical pre-ganglionic cervical vagal nerve stimulation to re-establish the diminished cardiac vagal tone in CHF.

Mechanisms of action
Several studies in various chronic HF animal models have shown a reduction of the progression of HF and a survival benefit with cervical vagal nerve stimulation.121-125 Major contributing mechanisms are detailed as follows:
(1) Antiarrhythmic effects: Vagal nerve stimulation increases the ventricular refractory period in humans,126 leads to a prolongation of the epicardial action potential duration,127 and decreases ventricular vulnerability to ventricular fibrillation.128
These electrophysiological effects may contribute to its potent antifibrillatory effects, as demonstrated during cervical vagal nerve stimulation experiments in post-infarction animal models.121-123 In addition, vagal nerve stimulation in HF reduces the loss of Cx43, which may prevent proarrhythmic conduction delays and dispersion of action potential duration.124
(2) Rate-slowing effects: Vagal nerve stimulation exerts profound negative chronotropic and dromotropic effects. Since an increased heart rate is associated with adverse prognosis in CHF,129 a reduction of heart rate both in SR and in AF might contribute beneficial therapeutic effects in CHF.
(3) Antifibrotic effects: In a coronary microembolization-induced HF model, chronic cervical vagal nerve stimulation has been shown to decrease ventricular replacement fibrosis and to blunt the development of CHF-associated cellular hypertrophy of the remaining myocytes.124
(4) Anti-inflammatory effects: Vagal nerve stimulation has potent anti-inflammatory effects.130 Recently, vagal nerve stimulation was shown to blunt HF-associated increases of tumour necrosis factor-α, interleukin-6, and C-reactive protein in two animal models of HF.124,125
(5) Reverse remodelling: Vagal nerve stimulation decreases ventricular end-systolic124,125 and end-diastolic diameters and improves LVEF.124,125 Chronic vagal stimulation has also been shown to reduce NT-proBNP levels in a dog HF model124 and biventricular weight in a rat HF model.122

Technology and implant technique
Implanting the stimulation lead in the neck is the unique component of this treatment, and the important issues are discussed below, assuming that placing an RV sensing lead and the implantable stimulator is done routinely by many cardiologists trained in implanting pacemakers (Figure 6):
(1) A surgeon with knowledge and understanding of the neck anatomy should be appointed for this procedure. Neurosurgeons, ENT or head and neck surgeons, or vascular or cardiothoracic surgeons who perform carotid endarterectomy will often be good candidates.
(2) The surgical approach should favour exposure of the vagus nerve rather than the carotid artery upon dissection of the carotid sheath.
(3) Manipulation of the nerve should be minimized, with working posterior to and around the nerve favoured over lifting it when placing the stimulation lead cuff.
(4) Attention should be given to proper selection of the cuff electrode size, as its fit on the nerve is important to vagus functionality: too tight a cuff might damage the nerve, while too loose a cuff does not allow the stimulation current to concentrate in the nerve, creating side effects and decreasing treatment effectiveness. Training in the proper use of the tools available for determining the appropriate cuff size, including the nerve diameter gauge and impedance measurements and their allowed ranges, should be performed.
(5) Fixation of the stimulation lead should be performed via suture sleeves to stable fascia and not to muscles with a wider range of motion. Care should be taken to leave sufficient slack as strain relief.
Figure 6 shows an example of a patient with an implantable vagal nerve stimulator.

Clinical data
The first multicentre, open-label phase II, two-staged study (8-patient feasibility phase plus 24-patient safety and tolerability phase) enrolled 32 NYHA Class II-IV patients (age 56 ± 11 years, LVEF 23 ± 8%).
Right cervical vagal nerve stimulation (VNS) with an implantable system was started 2-4 weeks after implant, slowly raising the intensity; patients were followed 3 and 6 months thereafter, with an optional 1-year follow-up. Overall, 26 serious AEs occurred in 13 of 32 patients (40.6%), including three deaths and two clearly device-related AEs (post-operative pulmonary oedema, need for surgical revision). Expected non-serious device-related AEs (cough, dysphonia, and stimulation-related pain) occurred early but were reduced and disappeared after stimulation intensity adjustment. There were significant improvements (P = 0.001) in NYHA class, QoL, 6 min walk test (from 411 ± 76 to 471 ± 111 m), LVEF (from 22 ± 7 to 29 ± 8%), and LV systolic volumes (P = 0.02). These improvements were maintained at 1 year.131 Several scientific articles have summarized the use of the cervical vagal nerve stimulator (CVS) in patients with CHF.131-133 Naturally, the data provided in those articles are limited, due to the small patient cohorts and the non-randomized nature of first-in-man and safety and feasibility studies; however, important considerations can be drawn from them:
(1) Patients with mild-to-moderate HF can tolerate the higher current amplitudes and other parameters used in chronic vagal stimulation vs. the parameters used for the treatment of epilepsy.
(2) Vagal stimulation may generate side effects like hoarseness, voice alteration, and increased cough. The majority of side effects are related to the underlying disease or a multitude of co-morbidities.
(3) In patients with both CVS and an ICD, no interference from vagal stimulation was recognized by the ICD, nor was any detectable on the intracardiac electrogram of the device.
(4) CVS increased the parasympathetic tone of subjects with HF (as measured by heart rate variability on 24 h Holter recordings, and as inferred from the reduction of resting heart rate).
(5) Vagal stimulation improved subjective as well as objective clinically relevant parameters of functional class, quality of life, submaximal exercise capacity, and cardiac structure and function in a first clinical study.

Unresolved issues
Atrial proarrhythmia. At the atrial level, an increased vagal tone substantially shortens the atrial refractory period and increases the heterogeneity of refractory periods in the atria.134 Whether this might promote the occurrence of AF, especially in HF patients with a diseased atrial substrate, is unknown. Experimental evidence in healthy dogs, however, suggests that the occurrence of AF during cervical vagal nerve stimulation depends on the intensity of vagal stimulation, with no AF occurrence at lower-level stimulation.135 Data on the occurrence of AF during vagal stimulation in HF patients need to be gathered prospectively.
Selectivity of neural stimulation. The cervical vagal nerve contains efferent and afferent fibres coursing not only to the thoracic but also to or from the abdominal viscera.136 So far, the issue of concomitant stimulation of efferent parasympathetic fibres to the gastrointestinal tract has not been thoroughly investigated. Such inadvertent stimulation might increase gastric acid secretion or intestinal motility. Likewise, afferent electrical stimulation or block of the vagal nerve below the diaphragm has been shown to decrease food intake and to reduce weight gain.137,138
Since these fibres potentially course through the cervical vagal nerve, inadvertent stimulation or block of them may occur during vagal nerve stimulation for HF, possibly inducing weight loss that could be mistaken for cardiac cachexia.

Clinical perspectives
CVS has not yet undergone randomized controlled studies. The currently ongoing INcrease Of VAgal TonE in CHF (INOVATE-HF; NCT01303718) prospective randomized controlled trial should provide data to determine whether patients with HF can benefit from this new treatment method. Due to the inherent issues of blinding patients and healthcare providers to the activity of a nerve stimulator, an open-label study design has been elected and authorized by regulatory authorities in the USA and Europe (Germany and Serbia). The purpose of the INOVATE-HF study is to demonstrate the long-term safety and efficacy of vagus nerve stimulation with the CardioFit system for the treatment of subjects with HF. Up to 650 patients with LV systolic dysfunction (EF < 40%) and NYHA Class III HF despite OMT are included in 60 centres. Patients are randomized 3:2 to active vs. control and followed for at least 12 months. The planned study duration is about 5.5 years. The two co-primary safety endpoints are freedom from procedure- and system-related events through 90 days post-implant (>75%) and time to first event in all-cause mortality and complications resulting in prolonged hospitalization. The primary efficacy endpoint of the study is a composite of all-cause mortality and unplanned HF hospitalization equivalents using a time-to-first-event analysis, once a pre-specified number of events has accumulated in both arms. This design enables a more thorough examination of safety issues related to the implant and activation of this new treatment modality, a necessity in the current regulatory environment. In addition to safety, the design enables a more accurate comparison of the effect of chronic vagal stimulation on clinical outcome parameters that include mortality and HF hospitalizations. Given the substantial theoretical background64,133 coupled with the beneficial effect of increased vagal tone on patient longevity139 in patients with HF and in patients post-MI,36 it is conceivable that in patients whose baseline vagal tone is low and in whom the parasympathetic tone is increased via VNS, clinical benefit may be observed.140,141

Intracardiac atrioventricular nodal vagal stimulation
Background
Thirty to forty percent of patients with CHF will eventually develop AF. Rapid ventricular rates during AF may further deteriorate HF or decrease the degree of LV resynchronization in patients with cardiac resynchronization devices, and may ultimately lead to inappropriate shock delivery in up to 5% of ICD recipients.142,143 Long-term selective atrioventricular (AV) nodal vagal stimulation for ventricular rate control has been developed as a potential adjunctive treatment modality for these patients.

Mechanisms and experimental models
Post-ganglionic cardiac vagal fibres, which preferentially supply the AV node, reside in an inferior right ganglionated plexus (IRGP) at the postero-inferior interatrial septum.144 This makes them amenable to stimulation from the endocardial surface, resulting in a graded negative dromotropic response.145
Chronic stimulation of the IRGP in animal models provides reliable and well-tolerated ventricular rate control in dogs146,147 and has been shown to be haemodynamically superior to His bundle ablation with RV pacing, probably because ventricular conduction over the His-Purkinje system is maintained.148

Technology and implant technique
The IRGP is located at the epicardial surface of the heart between the ostium of the coronary sinus and the entrance of the inferior vena cava. For chronic electrostimulation of this plexus, atrial pacing leads of current technology can be screwed into the IRGP from the right atrial endocardial site at the postero-inferior interatrial septum, either by using specifically shaped guiding catheters or manually shaped conventional mandrins.

Clinical data
Recently, the feasibility of such an approach was shown in a series of chronic human implants in HF patients with AF.149 A prospective multicentre study (AVNS: AV node stimulation study, NCT01095952), which investigates whether short-term probatory AV nodal vagal stimulation may avoid inappropriate shock delivery in patients with resynchronization defibrillators, is ongoing.

Electrical determinants of neurostimulation
Electrical stimulation of parasympathetic pre- or post-ganglionic efferent fibres obeys the same fundamental laws as myocardial electrostimulation, with the exception that the chronaxie time of about 180 µs is shorter.150,151 In contrast to the myocardial action potential, the action potential of neurons is very short, lasting only 10-20 ms.152 Thus, transmitter release and physiological effects may be elicited by electrical stimulation at higher frequencies, typically set to 20-50 Hz during cervical vagal stimulation for seizures or depression,150 100 Hz for carotid sinus stimulation,153 or 4-20 Hz for cervical vagal stimulation in HF.154 For post-ganglionic efferent parasympathetic stimulation, the frequency-response curve is bell-shaped with an optimum at 40 Hz.151,155 Since most stimulated nerve structures contain afferent and efferent autonomic fibres with varying fibre diameters and conduction velocities,150 efforts have been undertaken to preferentially stimulate efferent fibres. For example, cervical vagal nerve stimulation for the treatment of HF aims to stimulate efferent parasympathetic B fibres coursing towards the heart while trying to avoid afferent A and C fibre excitation. This can be achieved by hyperpolarizing the nerve fibres at the anode, thus preventing excitation, while simultaneously depolarizing fibres at the cathode. Since larger afferent A fibres (diameter 5-20 µm) are more sensitive to hyperpolarization than smaller efferent B fibres (1-3 µm) inside the vagal nerve, preferential efferent stimulation can be accomplished.156,157 Besides electrode configuration, the applied stimulus strength also affects the differential recruitment of A-C fibres and changes the frequency dependence of afferent neural stimulation.106 Finally, the application of biphasic impulses may have the benefit of discharging the membrane with the second phase of the impulse, thus preventing damage to the neurocytes.156
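To make the chronaxie concept concrete, the classical Weiss-Lapicque strength-duration relation (a standard model from the electrostimulation literature, not an equation taken from this report) links the threshold current of a rectangular stimulus to its pulse width:

```latex
% Weiss-Lapicque strength-duration relation (illustrative)
\[
  I_{\mathrm{th}}(t) \;=\; I_{\mathrm{rh}} \left( 1 + \frac{t_{\mathrm{chr}}}{t} \right)
\]
% I_th  : threshold current for a rectangular pulse of duration t
% I_rh  : rheobase, the threshold current for an infinitely long pulse
% t_chr : chronaxie, the pulse duration at which I_th = 2 * I_rh
```

With a chronaxie in the 180 µs range, neural targets reach threshold with far shorter pulses than myocardium, which is one reason neurostimulators can deliver brief pulses in high-frequency trains.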
Implantable haemodynamic monitoring devices
Clinical management to prevent acute decompensated heart failure (ADHF) and/or hospitalization in ambulatory HF patients remains challenging. There is an urgent need to develop strategies to reduce hospitalizations and re-admission rates for HF. Frequent monitoring of physiological data is imperative in the management of HF. The development of wireless and remote technology makes it possible to frequently monitor and transfer data via telemonitoring. The concept of telemonitoring involves patient-activated automatic devices which provide physiological parameters such as weight, blood pressure, heart rate, rhythm, and activity logs. Results of studies using telemonitoring have been contradictory. A meta-analysis demonstrated that telemonitoring may provide better outcomes compared with usual care, with a reduction in mortality and HF hospitalizations.158 Recently, the value of telemonitoring has been challenged by two randomized clinical trials. The results of the Telemedical Interventional Monitoring in Heart Failure (TIM-HF) trial showed no significant difference in all-cause mortality (the primary endpoint) or in the composite of cardiovascular death or HF hospitalization.159 The study of Chaudhry et al.160 demonstrated that telemonitoring failed to reduce HF hospital admissions, duration of hospital stay, or the frequency of admissions. One explanation might be the insensitivity of daily weight monitoring for predicting HF hospitalization, which is only 20%.161 Another strategy is the use of cardiac implantable devices (defibrillators and CRT) to stratify the risk of ADHF based on a single parameter such as thoracic impedance162,163 or heart rate variability,140 or on a combination of parameters.164,165 These parameters, single or combined, have a sensitivity of 60-70%, a positive predictive value of up to 7.8%, and false-positive alarm rates ranging from 1.8 to 2.7 per patient-year of monitoring. Although these parameters enable physicians to identify patients at increased risk of ADHF, they do not impact patient outcomes and are not sufficiently accurate to guide treatment adjustment. A new approach to monitoring the status of ambulatory HF patients and preventing potential hospitalizations may involve implantable devices providing real-time haemodynamic data to the clinician. Device companies have begun to develop new implantable devices designed to collect haemodynamic data. These investigational devices include RV, left atrial pressure (LAP), and pulmonary artery pressure (PAP) sensors.

Right ventricular pressure monitoring
The device, implantation, and monitored data
The Chronicle (model 9520, Medtronic Inc.) is an implantable haemodynamic monitor. The system consists of a specialized transvenous lead that has a sensor incorporated near the tip to measure intracardiac pressure, and a programmable device similar in size and shape to a pacemaker. Details of the components have been described previously.166 The device is able to monitor and telemeter systolic and diastolic RV pressure and RV dP/dt (positive and negative), to estimate pulmonary artery diastolic pressure, to monitor heart rate and patient activity, and to measure core body temperature. In addition, continuous remote monitoring of data is available. The implantation procedure is similar to that of a single-lead pacemaker. The device is positioned subcutaneously in the pectoral area and the lead is placed transvenously in the RV outflow tract or septum. The patient is furnished with a small external device, which aids in correcting for barometric pressure. The monitor's capabilities include pressure-sensing circuitry and a memory to store continuous pressure trends, as well as specific triggered events such as bradyarrhythmias, tachyarrhythmias, or patient-activated episodes. The device continuously measures and stores RV systolic, diastolic, and pulse pressure, estimated pulmonary artery diastolic pressure (ePAD), RV dP/dt, pre-ejection interval, and systolic time interval. The ePAD is defined as the RV pressure at the time of pulmonary valve opening or of maximal RV dP/dt, and has been shown to correlate with PA diastolic pressure (r = 0.87 at baseline and at 1 year), thus reflecting LV filling pressure.167,168 In addition, heart rate, patient activity levels, and central venous temperature are also monitored and stored.
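To illustrate the published ePAD definition (the RV pressure sampled at the instant of maximal dP/dt), the following is a minimal sketch, not Medtronic's actual firmware; the waveform and sampling rate are synthetic stand-ins:

```python
import numpy as np

def estimate_epad(rv_pressure_mmhg, fs_hz):
    """Estimate pulmonary artery diastolic pressure (ePAD) for one beat:
    the RV pressure at the instant of maximal dP/dt, which approximates
    the moment of pulmonary valve opening."""
    dpdt = np.gradient(rv_pressure_mmhg, 1.0 / fs_hz)  # mmHg/s
    i_open = int(np.argmax(dpdt))                      # index of max dP/dt
    return rv_pressure_mmhg[i_open]

# Synthetic single-beat RV pressure: rises from ~5 to ~35 mmHg and falls back
fs = 250.0                                   # assumed sampling rate, Hz
t = np.arange(0.0, 0.8, 1.0 / fs)            # one 0.8 s cardiac cycle
rv = 5 + 30 * np.sin(np.pi * t / 0.8) ** 3   # toy waveform, mmHg

print(f"ePAD estimate: {estimate_epad(rv, fs):.1f} mmHg")
```

On a real signal, the estimate would of course be computed beat by beat after QRS detection and filtering; the sketch only makes the definition operational.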
Clinical data
The early clinical experience with this implantable haemodynamic monitor was obtained in 32 HF patients followed for 17 months.169 In this non-controlled study, long-term monitoring of RV pressures showed either marked variability or minimal time-related changes. However, during clinical events due to volume overload, RV systolic pressures increased on average by 25%, around 4 ± 2 days before the HF exacerbations requiring hospitalization. In this patient cohort, the use of haemodynamic data led to a significant reduction of hospitalizations in comparison with the patients' previous history, and this was the basis for planning the COMPASS-HF study (Chronicle Offers Management to Patients with Advanced Signs and Symptoms of Heart Failure).170 The COMPASS-HF study was a multicentre, single-blinded trial in which all 274 participants received the implantable haemodynamic monitor but were randomized either to an intervention arm in which the device diagnostic data were used for patient management on the basis of a set of recommendations (n = 134) or to a control group (n = 140; no access to device diagnostic data for the first 6 months of the study). The enrolment criteria for this trial included HF with NYHA functional class III or IV, regardless of LVEF, with optimized standard medical therapy for at least 3 months before enrolment. Three study visits over 6 months were planned. A decrease in HF-associated events was the primary endpoint, and lack of system-related complications and pressure-sensor failures were the safety endpoints. Although healthcare providers were not blinded to treatment assignment, blinding of patients was maintained using both written communications and telephone contacts. The study results showed that the two safety endpoints were met (system-related complications in only 8% of cases), but the primary efficacy endpoint was not met, because the Chronicle group had a non-significant 21% lower rate of all HF-related events compared with the control group (P = 0.33). This finding is partly explained by a lower than expected event rate. A retrospective analysis showed a 36% reduction (P = 0.03) in the relative risk of a first HF-related hospitalization in the Chronicle group, with no difference with regard to LVEF (≥ or <50%).

Clinical perspectives
In March 2007, the FDA's Circulatory System Devices Panel voted against the approval of the Chronicle implantable haemodynamic monitor because its use was not proved to significantly improve clinical outcomes in the COMPASS-HF randomized, controlled trial. The next step was the planning of a trial designed to test the potential usefulness of combining the haemodynamic monitoring capabilities of an implantable monitoring device with an ICD in patients at risk of sudden cardiac death.
Indeed, a new trial, REDUCEhf, was designed to enrol 850 patients (later increased to 1300) with an indication for an ICD (NYHA functional class II or III with reduced LVEF) to test the hypothesis that the use of RV pressure-guided patient management would reduce HF-related events (hospitalization and emergency department or urgent clinic visits requiring parenteral therapy).171 Because of technical complications, REDUCEhf was prematurely ended after enrolment of 400 patients. Data analysis showed no benefit from haemodynamic monitoring and a lower than expected event rate.162,170,172 Demonstrating the efficacy of HF disease management programmes using RV pressure monitoring was particularly difficult, and many issues appear to limit the ability to demonstrate a substantial clinical benefit. These factors include the degree of HF severity in the tested population, the quality of comparative usual care (with potentially low external validity if the trial is managed by highly specialized centres), the choice of the primary endpoint (the need for medical visits may actually increase during continuous monitoring in view of earlier detection of worsening HF, but this may imply avoidance of subsequent hospitalizations), as well as the specific characteristics of the disease management programme adopted.172 Moreover, it is possible that a substantial improvement in patient care will require coupling the information provided by an implantable haemodynamic sensor to an effector capable of promptly instituting an appropriate therapy, thus 'closing the loop' and limiting the need for interventions by healthcare providers.173
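To illustrate the 'closing the loop' idea, the sketch below maps a daily filling-pressure reading to a therapy suggestion using fixed thresholds. It is a conceptual toy under invented pressure bands, not a clinically validated algorithm; it anticipates the left atrial pressure (LAP) monitoring discussed next, where the analogous mapping was performed by patients and physicians using five individually adjusted LAP ranges rather than by an autonomous device.

```python
def closed_loop_suggestion(lap_mmhg, target_low=10.0, target_high=20.0):
    """Map a daily left atrial pressure (LAP) reading to a therapy action.
    The thresholds are illustrative only; a real system would use ranges
    individually titrated by a physician for each patient."""
    if lap_mmhg > target_high:
        return "increase diuretic/vasodilator dose, recheck in 24 h"
    if lap_mmhg < target_low:
        return "reduce diuretic dose to avoid over-diuresis"
    return "no change; continue current regimen"

for reading in (8.0, 14.0, 25.0):
    print(f"LAP {reading:4.1f} mmHg -> {closed_loop_suggestion(reading)}")
```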
Left atrial pressure monitoring
Patients admitted for decompensated HF usually have an elevated LAP causing pulmonary congestion and oedema.174 The rise in LAP is usually gradual and precedes the onset of symptoms. Therefore, monitoring of LAP has the potential to forecast HF decompensation and to abort it by adjustment of drug therapy.

The device
The HeartPOD (St Jude Medical) comprises an implantable sensor lead that measures pressure, intracardiac electrograms, and temperature, coupled to a coil antenna positioned in the subcutaneous tissue (Figure 7). Folding proximal and distal nitinol anchors fix the sensor lead onto the interatrial septum. The device is either standalone or coupled to a CRT-D unit. A handheld patient advisory module (PAM) powers the implanted device (which has no battery) by radiofrequency wireless transmission. The PAM also measures the atmospheric pressure, which is then subtracted from the pressure measured by the implant to obtain the LAP. During interrogation, physiological waveforms of LAP and intracardiac electrograms are captured in the memory of the PAM for periods of up to 20 s. The PAM has the capacity to store 3 months of data with six daily interrogations.

Implantation
The device is implanted by cardiac catheterization and transseptal puncture, and requires special training. Access may be (i) entirely femoral, with implantation of the sensor module in a lower right abdominal pocket; (ii) entirely superior, with transseptal puncture performed through a deflectable sheath and a special puncture screw; or (iii) combined, with the transseptal puncture performed by femoral access and lead transfer to a superior access using a snare. The device was implanted successfully in 82 of 84 (98%) subjects included in the HOMEOSTASIS study,175 without any reported major periprocedural cardiac or neurological AEs.175-177 Post-implantation, patients are maintained on daily aspirin, as well as clopidogrel for ≥6 months. The bulk of foreign body material in the left atrium is smaller compared with septal defect closure devices, and post-mortem examination shows that fibrous tissue subsequently covers most of the pressure sensor (Figure 8), thus limiting thromboembolic risk.

Clinical data
The first human experience with the HeartPOD is reported in the HOMEOSTASIS study, a prospective, multicentre, observational, open-label registry in patients with NYHA III/IV HF, regardless of LVEF.175-177 Long-term performance of the device has been reported in 82 patients.175 Correlation between simultaneous measurements of LAP and pulmonary artery wedge pressure was high at 3 months (r = 0.97) and at 12 months (r = 0.99).175 Freedom from device failure was 95% (n = 37) at 2 years and 88% (n = 12) at 4 years.175 Causes of device failure were related either to measurement issues (artefacts or drift) or to communication malfunction. The clinical impact of the HeartPOD on patient outcome was reported in 40 patients.177 After a 3-month observation period, patients entered a 3-month titration period in which vasodilator and diuretic drug therapy was adjusted by a physician based upon the LAP readings. A 'stability' period of ≥6 months then followed, during which drug therapy, sodium and fluid intake, activity level, or physician contact was adjusted based upon five ranges of LAP readings that had been individually adjusted by the physician during the titration period. Compared with the observation period, the annual rate of death or HF events was significantly decreased during the titration and stability periods (0.68 vs. 0.28 per year; P = 0.041). There were also improvements in NYHA class (-0.7 ± 0.8, P < 0.001) and LVEF (+7 ± 10%, P < 0.001). Even though these results are encouraging, it should be stressed that the study was not randomized. The ongoing LAPTOP-HF study is a randomized clinical trial recruiting up to 730 patients with maximally treated NYHA III HF (irrespective of LVEF), randomized 1:1 to receive a HeartPOD vs. a PAM only (providing drug therapy reminders), that aims to show a reduction in worsening HF and hospitalizations. Successful extraction of the HeartPOD has been reported, although details are not provided.175 A locking stylet may be used, if necessary with an extraction sheath. The presence of an atrial septal defect after extraction is a possible issue, although the fibrotic membrane covering the HeartPOD may limit this complication.

Clinical perspectives
Self-titration of HF therapy by patients based upon LAP readings (similar to diabetics adjusting insulin doses based on glycaemia) is a paradigm shift in HF management. This strategy of course requires patient participation and compliance, and may not be suitable for many individuals. Patients with diastolic HF, in whom volume overload may rapidly cause pulmonary congestion, are a therapeutic challenge. Analysis of the LAPTOP-HF data in this subset of patients will clarify whether titration of diuretics based upon LAP readings improves outcome. LAP data may be of use to optimize AV delays in CRT-HeartPOD devices (as the left atrial A-wave is recorded), but this has not yet been tested. In conclusion, preliminary data relating to the HeartPOD show good mid-term device function and hold promise for improving patient self-management in HF. These data, however, need to be confirmed by the ongoing randomized LAPTOP-HF trial.
Device implantation requires special training, and techniques are still evolving. The presence of a lead allows extraction (in contrast to the PAP monitor), but further experience is required to better evaluate procedural complications.

Pulmonary artery pressure monitoring
The device
The CardioMEMS heart failure sensor (CardioMEMS Inc.) consists of a coil and a pressure-sensitive capacitor housed in a hermetically sealed silica capsule covered in medical-grade silicone. Two wired nitinol loops at the ends of the capsule serve as anchors to prevent distal migration of the sensor (Figure 9). The coil and capacitor form an electrical circuit that resonates at a specific frequency. The coil allows for electromagnetic coupling to the sensor by an external antenna, which is held against the patient's body. The antenna powers the implanted device and continuously measures its resonant frequency, which is then converted to a pressure waveform. The interrogating device measures the atmospheric pressure, which is subtracted from the pressure measured by the implanted sensor. The CardioMEMS sensor is designed to measure PAP.
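The physics behind such a passive sensor can be summarized by the resonance of an LC circuit (a general relation from circuit theory, not a device specification): ambient pressure deflects the capacitor membrane, the capacitance changes, and the resonant frequency shifts, allowing pressure to be read out wirelessly.

```latex
% Resonant frequency of the sensor's LC circuit (illustrative)
\[
  f_{\mathrm{res}}(p) \;=\; \frac{1}{2\pi \sqrt{L \, C(p)}}
\]
% L    : inductance of the coil (fixed)
% C(p) : capacitance, which varies with the applied pressure p
% A pressure change that increases C lowers f_res, and vice versa.
```

The external antenna therefore only needs to track the frequency at which the implant absorbs energy; no battery or active electronics are required in the capsule.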
Sensor delivery system and implantation
The CardioMEMS sensor is supplied pre-loaded and attached to a tether wire at the end of the delivery catheter. The sensor is implanted using a femoral venous approach. First, a Swan-Ganz catheter is advanced to the deployment site in the pulmonary artery. After identification of the target artery, a guidewire (0.018-0.025 inch) is placed through the Swan-Ganz catheter to allow the sensor delivery catheter system to be advanced. The delivery catheter is advanced over the wire and the sensor is released in the target vessel. After removal of the delivery catheter, the Swan-Ganz catheter is repositioned in the pulmonary artery proximal to the sensor. Subsequently, the implanted sensor is calibrated using PAPs acquired from the Swan-Ganz catheter.

Experimental data
Animal studies with follow-up of up to 6 months showed good correlation between sensor-based measurements and invasive pressure evaluations, and no propensity for in situ thrombus formation (K. Robinson et al. 2005, unpublished results). The first human implant of a wireless pressure sensor for monitoring PAP was performed in 2007.178 At 60 days post-implantation, no complications were observed and there was no evidence of pulmonary thrombosis. Subsequently, an observational cohort study was performed in 12 ambulatory HF patients to evaluate the accuracy of sensor-based determinations compared with measurements obtained by Swan-Ganz catheterization and echocardiography.179 The results of this study demonstrated a high correlation between simultaneous sensor-based PAP measurements and Swan-Ganz catheterization at baseline (r² = 0.90) and at 60-day follow-up (r² = 0.94). The correlation between the sensor and echocardiography was good (r² = 0.75). Systolic pressures obtained by the sensor tended to be higher than those measured by Swan-Ganz catheterization, which can be explained by differences in measurement method and sampling rate.

Clinical data
Recently, the results of the CardioMEMS HF Sensor Allows Monitoring of Pressures to Improve Outcomes in NYHA Functional Class III Heart Failure Patients (CHAMPION) prospective, multicentre, randomized, single-blind clinical trial were published.180 All patients (n = 550) received the device implantation, but physicians were blinded to daily PAP measurements in the control group (n = 280). The goal was to evaluate the efficacy and safety of the PAP sensor. The study met the safety endpoints: freedom from device- or system-related complications was 98.6%, and overall freedom from pressure-sensor failures was 100%. The primary efficacy endpoint was the rate of HF hospitalizations over 6 months. The use of daily PAP measurements reduced the rate of HF hospitalizations by 30% (hazard ratio 0.70, 95% confidence interval 0.60-0.84; P < 0.001). Over the entire randomization period, the rate of HF hospitalizations was reduced by 39% in the treatment group. When admitted for HF, patients in the treatment group had a significantly shorter length of stay compared with those in the control group (2.2 ± 6.8 vs. 3.8 ± 11.1 days, P = 0.02). The CHAMPION study was the first study in which the use of diagnostic data significantly reduced HF events. Two trial design issues are worth noting, which distinguish the CHAMPION trial from other device diagnostic studies: (i) target population risk and (ii) intensity of intervention. The target population in CHAMPION focused on HF patients with sustained NYHA Class III symptoms for at least 90 days prior to enrolment. Low-risk NYHA Class II and end-stage NYHA Class IV patients were excluded from the study. In addition, patients with chronic kidney disease (estimated glomerular filtration rate < 25 mL/min/1.73 m²) were excluded because of non-response to changes in medications in the outpatient setting. The major difference between the CHAMPION trial and other device diagnostic studies can be attributed to the intensity of intervention. The CHAMPION trial had specific pressure targets that providers were supposed to achieve using neurohormonal, diuretic, and/or vasodilator therapy. A significantly higher number of medication changes was observed in the treatment group compared with the control group (9.1 ± 7.4 vs. 3.8 ± 4.5 per patient, P < 0.001).

Clinical perspectives
In November 2011, the FDA's Circulatory System Devices Panel voted against the approval of the CardioMEMS implantable haemodynamic monitor because its use was not proved to significantly improve clinical outcomes in the randomized, single-blind CHAMPION trial. The greatest concern was the trial's design, which made it impossible to distinguish any treatment effect from the device itself in a single-blind trial: physicians knew which patients had the device, and specific treatment recommendations were made in the treatment group and not in the control group. Ambulatory HF patients are at high risk of hospital admission, and there is an urgent need to develop strategies to reduce hospitalizations and re-admission rates for HF. New technologies for invasive haemodynamic monitoring have been developed as an adjunct to clinical follow-up. The results of studies with implantable haemodynamic monitoring devices show great promise as adjunctive tools in the management of HF patients. The technology of haemodynamic monitoring is not incorporated in current defibrillators or CRT devices.
Future studies have to evaluate whether a role exists for combining impedance measurements by current cardiac devices with haemodynamic sensors.
Evaluation of A Mobile Learning System to Support Correct Medication Use for Health Promotion
Recently, the applications of mobile technology integration into innovative pedagogical approaches have resulted in new opportunities and challenges. The rapid development of mobile technology encourages the student participation, knowledge sharing, collaboration, and social interaction necessary to form authentic learning communities around student-generated content anytime and anywhere. Most health education interventions remain focused on traditional lecture methods and neglect the importance of student-centered learning approaches, and few studies have examined the effectiveness of mobile technology for health promotion. Therefore, this paper designed a mobile learning system for learning correct medication use and investigated students' continuance usage intention. The findings revealed that students' satisfaction with mobile learning was the major factor predicting continuance intention.

Mobile learning has thus become increasingly popular. A growing number of researchers have identified multiple educational settings for mobile learning, including science (Liu et al., 2009), mathematics (Song and Kim, 2015), and history (King et al., 2014), focusing on its impacts on student achievement. Sung et al. (2016) concluded that learners using mobile technology perform significantly better in terms of achievement than those who are not. Therefore, we would like to use the convenience of mobile learning to expand the effect of correct medication usage. Researchers have focused on predicting the adoption and satisfaction of mobile learning based on the technology acceptance model (TAM) (Davis, 1989). Hyman, Moser and Segala (2014) extended TAM and added learnability to evaluate its effects on behavioral intention and usage; the findings suggested that learnability did not have a significant effect on behavioral intention and usage. Alrasheedi, Capretz and Raza (2015) examined 14 factors which might impact mobile learning implementation; the results showed that perceived productivity was the primary factor in successfully applying mobile learning. Pindeh, Suki and Suki (2016) proposed a research framework which incorporated the TAM model with perceived playfulness to predict users' behavioral intention in mobile learning. Although researchers have paid much attention to acceptance and perception, long-term usage is crucial to the success of mobile learning. However, little research has been done on the continuance intention to use mobile learning, especially in the field of health education. This study aimed to investigate the factors that affect students' continuance intention to use mobile learning in health education. Therefore, we developed a mobile learning system related to correct medication usage and investigated the learning outcome and the continuance intention for mobile learning. Prior studies have suggested that the expectation-confirmation model (ECM) (Bhattacherjee, 2001) successfully explains and predicts users' intentions to continue using e-learning (Lee, 2010). In addition, perceived enjoyment is a strong predictor of users' intention in mobile learning (Pindeh et al., 2010). In this study, we incorporated the ECM and perceived enjoyment to investigate students' continuance intention to use mobile learning related to correct medication usage.
Correct Medication Usage. According to Chi et al.'s (2012) study on correct medication usage, there are five core abilities related to correct medication usage: the ability to clearly express personal conditions to one's physician, the ability to check information on medication packages, the ability to correctly take medications as prescribed, the ability to be one's own master when taking medications, and the ability to be friends with pharmacists and physicians. In order to promote essential information on correct medication usage, enhance health literacy, and improve health outcomes, the Taiwan Food and Drug Administration and the Taiwan Ministry of Education initiated the Health-promoting School Program (HPS) in 2009 in Taiwan. Chi et al. (2014) evaluated the effects of the HPS program, and the results indicated that its implementation had significantly enhanced students' health knowledge of correct medication usage.

With the rapid development of information and communication technologies, knowledge can be processed and transmitted more quickly than ever. The rise of information technology provides significant support for spreading health literacy and related knowledge. Shiue and Hsu (2017) investigated the effectiveness of a digital game related to correct medication usage; the results showed that game-based learning significantly enhanced students' knowledge of correct medication usage. The convenience of mobile technologies offers students the opportunity to engage in asynchronous and ubiquitous learning environments. As mentioned above, little research has focused on the continuance usage intention of mobile learning to enhance health literacy. In order to fill this gap, we developed a mobile learning system (Figure 1; source: developed for this study) to investigate the outcomes of mobile learning.

The expectation-confirmation model (Bhattacherjee, 2001) is based on integrating the TAM (Davis, 1989) with the expectation-confirmation theory (Oliver, 1980) to understand users' intention to continue using information systems (IS). The essence of the expectation-confirmation model (ECM) is that actual consumption experience is compared with original expectations and adjusted as users gain experience with the product, and the resulting confirmation leads to satisfaction. When users keep using the IS, their continued usage can be interpreted as reuse behavior.

Confirmation. Confirmation refers to the realization of the expected benefits of IS use (Oghuma et al., 2016). According to cognitive dissonance theory (Bhattacherjee, 2001; Festinger, 1957), users may experience cognitive dissonance if usefulness is disconfirmed during actual use. Confirmation will increase perceptions of usefulness, while disconfirmation will reduce such perceptions. To reduce dissonance, students using mobile technology for learning health information may try to adjust their perceptions of usefulness. Hence, the following hypothesis is stated: H1: Confirmation has a positive effect on perceived usefulness of mobile learning.
Confirmation is based on a rational process of comparing initial expectations with actual experience. Based on the ECM, if users perceive a higher level of concurrence with their post-adoption expectations, they will tend to have a higher level of satisfaction and continuance intention (Bhattacherjee, 2001). Research on social media has suggested that users who are satisfied with their blog experiences are more likely to enjoy using blogs (Shiau and Lou, 2013). Since mobile learning is more enjoyable than traditional learning, students may feel playful in mobile learning environments. Therefore, the confirmation of student expectations will have a positive influence on satisfaction with and enjoyment of mobile learning. Hence, the following hypotheses are proposed: H2: Confirmation has a positive effect on satisfaction with mobile learning. H3: Confirmation has a positive effect on perceived enjoyment of mobile learning.

Perceived Usefulness. Perceived usefulness, which relates to the performance aspect of IS use, has repeatedly been shown to affect behavior and to be a direct determinant of continuance intention (Wu and Chen, 2017). Several studies have supported a positive association between perceived usefulness and IS continuance intention (Hung, Chang and Hwang, 2011; Bøe, Gulbrandsen and Søebø, 2015). In addition, social behavioral research has suggested that perceived usefulness, communication quality, satisfaction, and perceived playfulness are critical determinants of a customer's attitude toward adoption (Hung et al., 2015). If students perceive mobile learning to be useful, an increased level of satisfaction is expected. The more usefulness users expect to gain from mobile learning, the more satisfied they will be, and the higher the likelihood that they will continue using this type of learning. Hence, the following hypotheses are proposed: H4: Perceived usefulness has a positive effect on satisfaction with mobile learning. H5: Perceived usefulness has a positive effect on continuance intention toward mobile learning.

Perceived Enjoyment. Enjoyment refers to the belief formed by an individual's personal experience with the environment (Moon and Kim, 2001). Zhou and Lu (2011) explored the factors affecting mobile instant messaging user loyalty and concluded that enjoyment significantly influences users' satisfaction with mobile instant messaging. Mäntymäki and Salo (2011) examined the role of enjoyment in continuance usage intention toward online shopping and found that continuance usage intention is strongly determined by perceived enjoyment. Pindeh et al. (2016) proposed that users' acceptance of mobile apps for language learning is influenced by usefulness and ease of use, and assumed that perceived playfulness would have a significant impact on intention to use mobile apps. Therefore, perceived enjoyment is an important determinant of continuance intention to use mobile devices to learn correct medication use. Hence, the following hypotheses are proposed: H6: Perceived enjoyment has a positive effect on satisfaction with mobile learning. H7: Perceived enjoyment has a positive effect on continuance intention toward mobile learning.
Satisfaction. Satisfaction refers to an affect, captured as a positive, indifferent, or negative feeling (Bhattacherjee, 2001). The satisfaction-intention link suggests that higher (lower) user satisfaction makes it more (less) likely that users will intend to use a system. A review of the educational literature suggests that satisfaction occurs when individuals are confident that a clear understanding of the learning material has been achieved and their learning results meet or exceed their perceptions of expected outcomes (Hui et al., 2008; Johnson, Aragon and Shaik, 2000). Therefore, if students feel satisfied when they use mobile devices, they will be more likely to continue engaging in mobile learning in the future. Hence, the following hypothesis is proposed: H8: Satisfaction has a positive effect on continuance intention toward mobile learning. According to the above reasoning, the proposed conceptual framework is illustrated in Figure 2.

The instruments were divided into two parts. The first part contained five questions related to demographic information. The second part consisted of 18 items intended to measure the constructs of expectation (3 items), perceived usefulness (4 items), perceived enjoyment (4 items), satisfaction (4 items), and continuance intention (3 items). The participants rated each construct item in part two on a 5-point Likert scale, ranging from 1 (strongly disagree) to 5 (strongly agree). The instrument was revised and adapted, as shown in Table 2, where items from previous studies were modified to fit the mobile learning environment to measure expectation (Bhattacherjee, 2001), perceived usefulness (Davis, 1989), perceived enjoyment (Kim, 2010), satisfaction (Bhattacherjee, 2001), and continuance intention (Bhattacherjee, 2001).

ANALYSIS AND RESULTS The partial least squares (PLS) technique was used to analyze the items, the variables, the overall research model, and the hypothesized relationships. A factor analysis was used to ensure sufficient reliability and validity of the measurements, followed by the structural model analysis. PLS was employed because it is appropriate for exploratory work and for prediction, as well as for analyzing complicated relations and models (Ringle, Sarstedt and Straub, 2012). Also, PLS makes minimal demands on measurement scales and does not require a specific distribution of the measured variables. The statistical software SmartPLS was used to analyze the data.
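For readers who want to reproduce the reliability checks reported in the next subsection, the two statistics involved have simple closed forms. The sketch below computes composite reliability (CR) and average variance extracted (AVE) from standardized factor loadings; the example loadings are hypothetical illustrations, not the paper's actual values (which appear in its Tables 1-2).

```python
# Composite reliability and AVE from standardized factor loadings -- the two
# statistics used to check convergent validity (CR > 0.70) and discriminant
# validity (sqrt(AVE) greater than the inter-construct correlations).
import numpy as np

def composite_reliability(loadings):
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    loadings = np.asarray(loadings, dtype=float)
    s = loadings.sum() ** 2
    e = (1.0 - loadings ** 2).sum()   # indicator error variances
    return s / (s + e)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    loadings = np.asarray(loadings, dtype=float)
    return (loadings ** 2).mean()

# Hypothetical loadings for a four-item construct (e.g. satisfaction):
sat = [0.86, 0.82, 0.79, 0.88]
print(f"CR        = {composite_reliability(sat):.3f}")        # vs 0.70 threshold
print(f"AVE       = {average_variance_extracted(sat):.3f}")
print(f"sqrt(AVE) = {np.sqrt(average_variance_extracted(sat)):.3f}")
```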
Measurement Model. Factor loadings and the average variance extracted (AVE) were used to test the convergent validity and reliability of each variable in this study. Table 1 shows that all the constructs exhibited internal consistency, exceeding the threshold value of 0.70. To satisfy discriminant validity, the square root of the AVE should be greater than the inter-scale correlations. Table 2 shows that the elements along the diagonal were much greater than the off-diagonal elements. In addition, all composite reliabilities (CRs) were > 0.80, greater than the critical value of 0.70. The analyses confirmed the convergent validity and reliability of the measurement model; discriminant validity was also confirmed. Table 3 shows that seven out of the eight hypotheses were supported according to the PLS analysis. Confirmation had a significant impact on perceived usefulness, satisfaction, and perceived enjoyment, supporting H1, H2, and H3. Perceived usefulness affected satisfaction and continuance intention, supporting H4 and H5. Perceived enjoyment affected satisfaction, thus supporting H6. However, perceived enjoyment was not found to have an influence on continuance intention, so H7 was not supported.

CONCLUSION AND IMPLICATIONS Although the ECM has been widely used in studying IS usage, little research has examined its impact in mobile learning environments. In our study, perceived usefulness and satisfaction were influential factors for continuance intention to use mobile learning technology. The PLS analysis showed that seven out of eight hypotheses were supported with significant variance explained, thus providing evidence that the model is effective for analyzing student decisions to continue using mobile learning technology. Our findings that perceived usefulness and satisfaction are salient predictors of continued IS usage are similar to the results of other IS research studies (Bøe et al., 2015; Shiue and Hsu, 2017; Yeh and Teng, 2012).

As shown in the ECM model, satisfaction with mobile learning was the major factor predicting continuance intention, followed by perceived usefulness. The only non-significant relationship found was between perceived enjoyment and continuance intention. A plausible explanation for this result is that the degree to which students are satisfied with mobile learning, in regard to their understanding of correct medication usage, affects the degree to which they are likely to continue using mobile learning to promote their health literacy. These students did not particularly enjoy learning correct medication usage on their mobile devices; therefore, the learning content must be improved to make it more enjoyable and motivating.

Mobile devices provide various beneficial features for learning, such as real-time access to information, context sensitivity, instant communication, and feedback. These features may enhance the effects of certain pedagogies, such as self-directed learning, collaborative learning, or formative assessment. This study revealed the continuance intention to use mobile devices for learning correct medication usage. Further efforts are needed to continuously promote living a healthy life in this population. To the best of our knowledge, little research has been conducted to investigate health promotion effectiveness and continuance intention usage from a broad perspective to construct a theoretical framework for such a study.
LIMITATIONS AND FUTURE DIRECTIONS This study showed that students were well informed about the correct use of medicines, but their use of medicines still needs improvement. It is suggested that, in the future, teachers should strengthen their coverage of the content and importance of proper medication usage when teaching in schools. Teachers might incorporate new technologies, such as mobile devices and tablets, to increase students' learning motivation and enhance their understanding of proper medication usage through meaningful activities that guide them in the proper use of medicines. Finally, the population of this study comprised students who were already utilizing mobile learning.

Figure 1. Sample Correct Medication Use

The participants of this research consisted of 118 undergraduate students who took an Introduction to Health and Food course in Tainan City, Taiwan. All the students taking the course were informed about this research and were also informed that participation in the research was voluntary. All data were collected anonymously. A total of 118 students voluntarily participated in the study, 52 of whom were males (44%) and 66 females (56%). All of the students had already used mobile devices for communication, information, or self-study.

Figure 2. Research Model (source: developed for this study)

Figure 3 shows the results of the structural model. We analyzed the R² values to assess the explanatory power of the structural model. In this study, we proposed an extended ECM model of students' continuance intention to use mobile learning within correct medication use contexts.

Table 3. Results of Hypotheses Testing
2019-09-19T09:04:37.786Z
2019-09-23T00:00:00.000
{ "year": 2019, "sha1": "2318fabbd9e987aa764c183d0476c438dd5801f4", "oa_license": null, "oa_url": "https://doi.org/10.32327/ijmess/8.3.2019.15", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "d752f738a0378985940ab0ab10d51f2a8e6a314c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
251066928
pes2o/s2orc
v3-fos-license
Compton scattering for photon and gluon in fixed-target collisions at AFTER@LHC We calculate the Compton scattering for photon and gluon with the Klein-Nishina formula in fixed-target collisions using the proton and lead beams at AFTER@LHC. In these collisions, we can investigate the particular case of Compton scattering at the partonic level, such as $\gamma q\rightarrow q\gamma$, $\gamma q\rightarrow qg$, $gq\rightarrow q\gamma$, and $gq\rightarrow qg$, which can help to check the equivalent-photon approximation, understand the dynamics of hadron collisions at high energies, and probe the inner structure of hadrons.

INTRODUCTION Fixed-target collisions permit the study of photon, lepton, jet, and hadron production in the target fragmentation region, as well as the structure of nuclear matter and the spin composition of the nucleon. Indeed, many phenomenological models predict that a first-order phase transition may occur in compressed baryon-rich matter created in fixed-target collisions at low energy [1-9]. In this context, we find it important to study the potential offered by the fixed-target programs at the High Intensity Heavy Ion Accelerator Facility (HIAF) [10], the Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR) [11,12], the Multi-Purpose Detector (MPD) experiment at the Nuclotron-based Ion Collider fAcility (NICA) [13], the Beam Energy Scan (BES) program at the Relativistic Heavy Ion Collider (RHIC) [14-16], and AFTER@LHC at the Large Hadron Collider (LHC) [17]. Especially for AFTER@LHC, the System for Measuring Overlap with Gas (SMOG) [18] would become a quarkonium, prompt photon, dilepton, jet, and heavy-flavour observatory, given its large expected luminosity at a relatively high center-of-mass system (c.m.s.) energy of 115 GeV per nucleon with a 7 TeV proton beam and 72 GeV per nucleon with a 2.76 ATeV lead beam, with the high precision typical of the fixed-target mode [19-24]. In such collisions, one can investigate specific reactions, such as Compton scattering, where one of the colliding particles serves as an emitter of a photon or gluon and the other serves as a target.

The scattering between photons and electrons is one of the most important physical processes that can generically be called Compton scattering, first noticed by A. H. Compton in 1923 [25,26]. The wavelength shift of the scattered photon, \(\Delta\lambda = (h/m_ec)(1-\cos\theta)\), was first observed in Compton scattering. In this process, all observable phenomena involve photon-electron interactions and convincingly demonstrate that light comprises particles with energy and momentum. The calculation of the Compton scattering process by Dirac [27] and Gordon [28], and with full spin and relativistic corrections by Klein and Nishina [29], provided a convincing test of the Dirac equation. Indeed, the scattering of a photon or gluon on matter through Compton scattering is a powerful tool to study its inner structure. Because of these wide applications, many efforts have been made to develop theoretical methods for ab initio calculations of the Compton scattering process. Several approaches have been developed, such as the free electron approximation (FEA) [29,30], the impulse approximation (IA) [31-34], the incoherent scattering function/incoherent scattering factor (ISF) [35-37], and the scattering matrix (SM) [38-46].
In the present work, we study the Compton scattering for photon and gluon with the Klein-Nishina formula [29] in fixed-target collisions using the proton and lead beams at AFTER@LHC. In such collisions, we can investigate the particular case of Compton scattering at the partonic level, such as γq→qγ and γq→qg, for the proton and lead beams running on the proton target. In the proton-target rest frame, the energy of the photons can become significant if the moving charge (the proton or lead beam) becomes ultra-relativistic, as at the LHC. In photon-hadron collisions, relativistically moving charged hadrons are accompanied by electromagnetic fields that can effectively be used as quasi-real-photon beams, obtained from a semi-classical description of high-energy electromagnetic collisions. At very high energies, these quasi-real photons are energetic enough to initiate hard interactions. Indeed, the production of gluons from Compton scattering is also a very interesting process, because it can help in understanding the dynamics of hadron collisions at high energies; it can test perturbative quantum chromodynamics (pQCD) calculations and probe quark matter. In this paper, we report on a feasibility study of Compton scattering for photon and gluon in fixed-target collisions at AFTER@LHC using LHC beams. In Sec. II we present the Compton scattering formalism for photon and gluon in fixed-target collisions for the proton and lead beams running on the proton target at AFTER@LHC. The numerical results for photon and gluon production from Compton scattering in p-p and p-Pb collisions at AFTER@LHC energies are plotted in Sec. III. Finally, the conclusion is given in Sec. IV.

II. GENERAL FORMALISM We study photon and gluon emission by relativistic heavy ions. When traversing a proton target, the projectile protons or ions can interact with the target. In the Klein-Nishina treatment [29], the quarks in Compton scattering are treated as free quarks in the laboratory system (the proton-target rest frame); all binding effects and many-body interactions are neglected in the scattering process. The prompt photon and gluon can be produced by the initial photon or gluon interacting with a quark from the proton target, through the γq→qγ, gq→qγ, γq→qg, and gq→qg processes. In this situation, the factorized cross section of Compton scattering for photon and gluon in fixed-target collisions for the proton and lead beams running on the proton target at AFTER@LHC can be written as a convolution of the photon (or gluon) spectrum with the partonic cross sections, where f is the quark distribution of the proton target. In the laboratory system at rest, we chose the quark content as (uud) for the proton target in the quark model [47,48]; x is the momentum fraction of the photon (or gluon). Based on the Klein-Nishina formula, the total cross sections for Compton scattering, \(\hat\sigma(\gamma q\rightarrow q\gamma)\), \(\hat\sigma(gq\rightarrow q\gamma)\), \(\hat\sigma(\gamma q\rightarrow qg)\), and \(\hat\sigma(gq\rightarrow qg)\), can be written in terms of the energy ε of the final photon or gluon and the energy ω = xE_beam of the initial photon or gluon; the beam energy E_beam is 7 TeV and 13 TeV for the proton beam, and 2.76 TeV and 5.02 TeV per nucleon for the lead beam at the LHC. In the laboratory system at rest, the four-momentum of the quark in the proton target is p = (m_q, 0, 0, 0), where m_q is the mass of a valence quark in the proton. The equivalent photon spectrum for a charged nucleus moving with relativistic factor γ ≫ 1 can be obtained from the semiclassical description of high-energy electromagnetic collisions. It attains very large values, since the nuclear charge then acts as a whole.
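As an illustration of the ingredients entering this formalism, the sketch below evaluates the Klein-Nishina total cross section (in units of the Thomson cross section of the struck particle) and the logarithmic equivalent-photon spectrum written out in the next paragraph. The constituent-quark mass, the lead radius, and the quark-charge factor are representative assumptions for illustration, not the paper's exact inputs.

```python
# Klein-Nishina total cross section and the point-like equivalent-photon flux
# n(omega) = (2 Z^2 alpha / (pi omega)) ln(gamma_L / (omega R)).
import numpy as np

ALPHA = 1.0 / 137.036     # fine-structure constant
HBARC = 0.1973            # GeV*fm, converts the nuclear radius to GeV^-1

def sigma_kn_over_thomson(E_gamma, m, charge=1.0):
    """Klein-Nishina total cross section divided by the Thomson cross section
    of the struck particle; x = E_gamma/m in the target rest frame.  The
    charge**4 factor crudely accounts for fractional quark charges."""
    x = E_gamma / m
    term1 = (1 + x) / x**3 * (2 * x * (1 + x) / (1 + 2 * x) - np.log(1 + 2 * x))
    term2 = np.log(1 + 2 * x) / (2 * x) - (1 + 3 * x) / (1 + 2 * x)**2
    return charge**4 * 0.75 * (term1 + term2)

def photon_flux(omega, Z, gamma_L, R_fm):
    """Equivalent-photon spectrum dN/domega of a point-like charge Ze, cut off
    where the logarithm would turn negative (omega in GeV)."""
    R = R_fm / HBARC                                   # radius in GeV^-1
    log_arg = np.maximum(gamma_L / (omega * R), 1.0)
    return 2 * Z**2 * ALPHA / (np.pi * omega) * np.log(log_arg)

# Pb beam at 2.76 ATeV: gamma_L ~ 2760/0.938 ~ 2940 in the target rest frame.
omega = np.logspace(-3, 1, 5)                          # photon energies, GeV
print(photon_flux(omega, Z=82, gamma_L=2940.0, R_fm=7.1))
print(sigma_kn_over_thomson(E_gamma=1.0, m=0.3))       # ~0.3 GeV quark (assumed)
```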
The form factor is still not zero at \(Q^2 \sim 1/b^2 \sim 1/R^2\); therefore, the photon spectrum of a point-like nucleus with a cutoff at \(Q^2 \sim 1/R^2\) is used, which is equivalent to neglecting the nuclear size in high-energy nuclear collisions, where R is the radius of the nucleus and b is the impact parameter. A relativistic nucleus with electric charge Ze moving with relativistic factor γ ≫ 1 with respect to some observer develops an equally strong magnetic-field component. It resembles a beam of real photons, whose spectrum at low photon energies can be written as [49-52]

\[ n(\omega) = \frac{2Z^2\alpha}{\pi\omega}\,\ln\frac{\gamma}{\omega R}, \]

where ω is the photon energy and R = b_min is the radius of the nucleus (b_min is the cutoff on the impact parameter). In the logarithmic approximation, the results obtained from a purely classical treatment or by including the form factor are related to each other through a rescaling of the relativistic factor. For the proton, the equivalent photon spectrum can be obtained from the Weizsäcker-Williams approximation [53-55], where α is the momentum fraction of the photon and \(A_p = 1 + (0.71\,\mathrm{GeV}^2)/Q^2_{\rm min}\); here m_p is the mass of the proton, and at high energies \(Q^2_{\rm min}\) is given to a very good approximation by \(m_p^2\alpha^2/(1-\alpha)\). The gluon distribution for a nucleus, \(f_{g/A}(x)\), is given by [56,57]

\[ f_{g/A}(x) = A\,R_A(x)\,f_{g/N}(x), \]

where \(R_A(x)\) is the nuclear modification factor [58] and A is the nucleon number of the nucleus. The factor xg(x) is the gluon distribution function of the nucleon, which can be parametrized by the functional form [59,60]

\[ xg(x) = A_0\,x^{A_1}(1-x)^{A_2}, \]

where the free parameters A_0 = 30.4571, A_1 = 0.5100, and A_2 = 2.3823 are fixed by a global analysis of total cross-section data below medium energy [60].

III. NUMERICAL RESULTS We present the calculations of the Compton scattering spectra for photon and gluon for proton and lead beams incident on a fixed proton target. Figs. 1 and 2 show the spectra of gluon and photon production for protons and lead ions aimed at a proton target, respectively. The contribution of quark-gluon Compton scattering is important because of the high gluon density of the nucleons. Seen from the proton target at rest, the photon spectrum becomes important: especially for a nucleus, the equivalent photon spectrum obtained from the semiclassical description of high-energy electromagnetic collisions behaves as \(dN/d\omega \sim Z^2\ln\gamma\), so cross sections are enhanced by a factor of \(Z^2\) and the relativistic factor becomes very large at LHC energies.

FIG. 2. The same as Fig. 1, but for photon production from Compton scattering for the proton and lead beams running on the proton target at AFTER@LHC.

Therefore, the contribution of photon and gluon production from photon-quark Compton scattering is evident at AFTER@LHC. But for the proton beam running on the proton target, the contribution of photon-quark Compton scattering is small compared with the gluon-quark Compton scattering process.

IV. CONCLUSIONS In summary, we have investigated the production of photon and gluon from Compton scattering with the Klein-Nishina formula for the proton and lead beams running on the fixed proton target at AFTER@LHC. All binding effects and many-body interactions are neglected in the scattering process in the laboratory system (the proton-target rest frame), and the quarks of the proton target are treated as free quarks.
In such collisions, we have investigated the particular cases of gluon-quark Compton scattering at the partonic level, gq→qγ and gq→qg, as well as photon-quark Compton scattering (γq→qγ and γq→qg), since the equivalent photon spectrum from the semiclassical description of high-energy electromagnetic collisions can become significant at LHC energies. Therefore, Compton scattering for photon and gluon is important for checking the equivalent-photon approximation, understanding the dynamics of hadron collisions at high energies, and probing the inner structure of hadrons.
2022-07-27T01:16:07.515Z
2022-07-26T00:00:00.000
{ "year": 2022, "sha1": "2ecc447a8f6a186bb7a6d62cb1f5786194e96313", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2ecc447a8f6a186bb7a6d62cb1f5786194e96313", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
214057323
pes2o/s2orc
v3-fos-license
Overshoot dependence on the cross-shock potential. Coherent downstream oscillations of the magnetic field in shocks are produced by coherent ion gyration and quasi-periodic variations of the ion pressure. The amplitude and the positions of the pressure maxima and minima depend on the cross-shock potential and the upstream ion temperature. Two critical cross-shock potentials are defined: the critical gyration potential (CGP), which separates the cases of increase or decrease of the normal component of the velocity of the distribution center, and the critical reflection potential (CRP), above which ion reflection becomes significant. In weak shocks with very low upstream kinetic-to-magnetic pressure ratio β, the CRP exceeds the CGP. For potentials below the CGP the first downstream maximum of the magnetic field is shifted farther downstream and is larger than the second one. For higher potentials the first maximum occurs just behind the ramp and is lower than the second one. With the increase of the upstream temperature, the CGP exceeds the CRP. For potentials below the CRP the effects of ion reflection are negligible and the shock profile is similar to that of very low β shocks. If the potential exceeds the CRP, ion reflection is significant, the magnetic field increase toward the overshoot becomes steeper, and the largest peak occurs at the downstream edge of the ramp.

Understanding of the shock structure has substantially improved due to these high-quality observations and also due to numerical simulations. The frontier of observational shock studies has shifted recently toward the processes occurring within a few ion convective gyroradii in both directions from the ramp along the shock normal (Dimmock et al., 2012; Wilson et al., 2012, 2014; Johlander et al., 2016; Burgess et al., 2016; Eselevich et al., 2017; Wilson III et al., 2017; Gingell et al., 2017). Magnetic profiles of collisionless shocks are rarely monotonic, even for low Mach numbers (Greenstadt et al., 1975; Greenstadt et al., 1980; Russell et al., 1982a; Mellott and Greenstadt, 1984; Farris et al., 1993; Balikhin et al., 2008; Russell et al., 2009; Kajdič et al., 2012). Since the peak value of the downstream oscillations increases with the Mach number, for a long time it was believed that overshoots are produced by ion reflection in super-critical shocks (Livesey et al., 1982; Russell et al., 1982b; Sckopke et al., 1983; Scudder et al., 1986; Mellott and Livesey, 1987). Super-critical shocks are shocks with a Mach number exceeding the critical Mach number (Edmiston and Kennel, 1984; Kennel, 1987), so that resistivity (Edmiston and Kennel, 1984) and thermal conductivity (Kennel, 1987) alone cannot provide the dissipation necessary to sustain a shock. Eventually, coherent downstream oscillations were observed at a very low Mach number shock with an Alfvénic Mach number of M = 1.3 and a magnetic compression of B_d/B_u = 1.3. The oscillating trail behind the ramp exhibited all the features expected for a supercritical shock, such as the largest first peak, spatially periodic peaks, and a gradual decrease of the peak amplitude. Such oscillations, albeit often less ordered, were found to be common in low Mach number shocks (Kajdič et al., 2012). They were successfully explained as the result of coherent ion gyration upon crossing the shock ramp and subsequent collisionless relaxation due to gyrophase mixing (Ofman et al., 2009; Ofman and Gedalin, 2013; Gedalin, 2015; Gedalin et al., 2015, 2018).
It has been shown that the largest peak amplitude is determined mainly by the magnetic compression and the cross-shock potential, while the damping rate of the oscillations is related to the upstream thermal-to-fluid speed ratio (Gedalin, 2015). Shapes of the downstream profile, such as the relative heights of the first oscillations and the steepness of the magnetic field increase up to the first peak, vary considerably among observed shocks, even subcritical ones. Sufficient attention has not been devoted so far to the relation of the details of the magnetic oscillation pattern to the shock parameters and the ion kinetics in the shock front. In particular, the amplitudes and positions of the first peaks, which are not yet distorted by gyrophase mixing, may provide information about the cross-shock potential as well as about ion transmission and reflection.

Weak low-β shocks. In what follows, B_u is the upstream magnetic field magnitude, T_u is the upstream ion temperature, n_u is the upstream ion number density, \(v_T = \sqrt{T_u/m}\) is the upstream ion thermal speed, m is the ion mass, and \(\beta = 8\pi n_uT_u/B_u^2\) is the upstream kinetic-to-magnetic pressure ratio. The corresponding parameters for electrons are denoted by the index e. In order to explain the basic mechanism producing the downstream oscillations, let us consider a simplified model of a perpendicular shock. We treat the shock as a jump in the magnetic field from B_u to B_d = RB_u occurring within a narrow ramp. Accordingly, the fluid drift speeds upstream and downstream are V_u and V_d = V_u/R. We shall also neglect the electron contribution to the plasma pressure and treat the ions as a monoenergetic beam entering the shock with velocity V_u along the shock normal. The analysis is done in the normal incidence frame, where x is along the shock normal (toward downstream) and z is along the magnetic field. The equations of motion for \(\dot v_x\) and \(\dot v_y\) inside the ramp follow from the Lorentz force in the ramp electric and magnetic fields. We integrate the equations of motion across the ramp assuming \(|v_y| \sim v_T \ll V_u\), where v_T is the thermal speed of the upstream ions. In this approximation we obtain expressions (3) and (4) for the velocity components at the downstream edge of the ramp. Here u denotes the ion velocity at the downstream edge of the ramp, while v(x) denotes the ion velocity at the position x inside the ramp. The second term in (3) is a small correction for ramp widths of order the ion inertial length \(c/\omega_{pi}\) and \(v_T/V_u \ll 1\); this small correction can be neglected for our purposes. In (4) the only term is small but nonzero. Thus, if the cross-shock potential is \(\phi = s(mV_u^2/2e)\), the ion velocity along the normal just after crossing the jump is \(u_x \approx V_u\sqrt{1-s}\). The ion motion is then described as a drift along the shock normal with velocity V_u/R and gyration around the magnetic field, where \(\Omega_d = eB_d/m_ic\) is the downstream ion gyrofrequency. For a cold beam all ions move together, and the coordinate along the shock normal is obtained by integrating v_x over time. In general, it is not possible to derive an analytical expression for v_x(x). For our purposes it is sufficient to restrict ourselves to the case in which v_x(x) is a single-valued function. Let us define the critical gyration potential (CGP) \(s_{cr} = 1 - 1/R^2\). For s < s_cr the initial gyrophase is ϕ ≈ 0, so that dv_x/dx < 0 at the downstream edge of the ramp. For s > s_cr the initial gyrophase is ϕ ≈ π, so that dv_x/dx > 0 at the downstream edge of the ramp. The total (dynamic and kinetic) ion pressure is \(p_{i,xx} = mnv_x^2 = mn_uV_uv_x\), where we have used mass conservation, \(nv_x = n_uV_u\). Pressure balance requires \(p_{i,xx} + B^2/8\pi = \mathrm{const}\), so that the magnetic field has maxima at the minima of the ion pressure.
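The mechanism just described can be illustrated with a few lines of code. The sketch below follows the cold-beam picture above: behind the ramp, v_x is the drift V_d plus a gyration whose initial phase depends on whether s is below or above s_cr = 1 - 1/R². Here u_y is neglected and units are normalized (V_u = 1, Ω_d = 1); this is an illustration of the text's estimates, not the full test-particle machinery used later.

```python
# Cold-beam v_x(t) downstream of a perpendicular shock: the first minimum of
# v_x (i.e. the first maximum of B) sits at Omega_d*t = pi for s < s_cr and at
# the downstream ramp edge (t = 0) for s > s_cr.
import numpy as np

def vx_downstream(t, R, s):
    """Normal velocity of the cold beam behind the ramp (V_u = 1)."""
    V_d = 1.0 / R                          # downstream drift speed
    u_x = np.sqrt(1.0 - s)                 # speed at the downstream ramp edge
    return V_d + (u_x - V_d) * np.cos(t)   # gyration about the drift

R = 2.0
s_cr = 1.0 - 1.0 / R**2                    # critical gyration potential = 0.75
t = np.linspace(0.0, 2.0 * np.pi, 7)

for s in (0.4, 0.9):                       # one case below CGP, one above
    vx = vx_downstream(t, R, s)
    phase_of_min = t[np.argmin(vx)]
    side = "below" if s < s_cr else "above"
    print(f"s = {s} ({side} s_cr = {s_cr}): first v_x minimum "
          f"(B maximum) at Omega_d*t = {phase_of_min:.2f}")
```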
The latter occur at the minima of v_x. For s < s_cr the velocity decreases inside the ramp and keeps decreasing down to \(v_{x,\min} = V_d - v_\perp\) at \(\Omega_dt + \phi = \pi\), which approximately corresponds to \(x_l = \pi V_d/\Omega_d\) for ϕ ≈ 0. Thus, the first maximum of the magnetic field occurs at x_l, at the pressure minimum p_l. With the increase of s, the relative contribution of u_y to v_⊥ increases, which moves the position of the first pressure minimum closer to the ramp. For s > s_cr the velocity decreases inside the ramp but starts to increase just behind it. Thus, the first maximum of the magnetic field occurs at x = 0 (the downstream edge of the ramp), at the pressure p_h. Since the later minima of v_x are deeper, one has p_h > p_l, which means that the first peak will be lower than the subsequent ones corresponding to the pressure minima.

For a cold ion beam the amplitude of further pressure oscillations does not change. Finite temperature leads to divergence of the ion trajectories and gradual gyrophase mixing. The divergence occurs already at the shock crossing, since the downstream \(v_{x,d} = \sqrt{v_{x,u}^2 - 2e\phi/m}\), and the spread in v_{x,u} results in a more substantial spread in v_{x,d}. Moreover, there is a nonzero v_y which affects the gyration speed v_⊥ and the gyrophase ϕ, which are now different for different particles. The downstream ion pressure including finite temperature is obtained as an integral over the distribution. It has been shown (Gedalin et al., 2015; Gedalin, 2016a) that finite temperature results in collisionless relaxation, during which the downstream ion distribution gyrotropizes and the pressure oscillations damp out. The relaxation is faster for larger v_T/V_u. In oblique shocks the mechanism of generation of the downstream oscillations is the same; relaxation is faster for smaller angles θ between the shock normal and the upstream magnetic field (Gedalin, 2015; Gedalin et al., 2015).

With the increase of the magnetic compression, the CGP rapidly increases. At R = 2 this critical value is s_cr = 0.75. Although such high cross-shock potentials cannot be completely excluded, they are not often observed (Dimmock et al., 2012). Thus, we expect that in most shocks the potential is below the CGP. Yet, in many shocks the first magnetic peak occurs right at the downstream edge of the ramp, and in many cases it is also the largest peak. The above analysis is valid, strictly speaking, only for sufficiently low-β (\(\beta = 8\pi n_uT_u/B_u^2\)) shocks, since the number of quasi-reflected and/or reflected ions rapidly increases with increasing \(v_T/V_u\), where \(v_T = \sqrt{T_u/m}\) is the upstream ion thermal speed (Gedalin, 2016b). In the narrow-shock approximation, all ions having initially \(mv_x^2/2 < e\phi\) cannot cross the ramp. This mode of reflection is efficient when \(1 - \sqrt{s} \lesssim 2v_T/V_u\). Deceleration of quasi-reflected ions inside the ramp can be expected to result in a faster reduction of the ion pressure with distance from the upstream edge of the ramp, that is, a steeper increase of the magnetic field.

Advanced test particle analysis vs observations. The principles of the advanced test particle analysis have been described in detail by Gedalin and Dröge (2013). In brief, a model magnetic field profile is chosen, supplemented with a model electric field shape. The basic upstream plasma parameters, that is, the ion and electron β and the angle θ between the shock normal and the upstream magnetic field, are chosen and remain fixed during the analysis. Choosing a magnetic compression ratio R, the rest of the significant parameters are varied.
With each set of parameters, ions are numerically traced across the shock, the ion pressure is determined, and the corresponding magnetic field is derived from the pressure balance. The parameters are varied until reasonable agreement with the adopted model profile is achieved: the asymptotic values of the magnetic field should be equal and the fluctuations as small as possible. It has been found that the most influential parameters are the Alfvénic Mach number M and the normalized cross-shock potential s. There is also a weak dependence on the shock width D. The magnetic profile chosen for the analysis is taken in the form of a smooth ramp of width D, with \(B_x = B_u\cos\theta\), \(B_y \propto dB_z/dx\), and \(E_x \propto dB_z/dx\). The coefficients of proportionality are constrained by the chosen values of the normal incidence frame cross-shock potential s_NIF and the de Hoffmann-Teller potential s_HT (Goodrich and Scudder, 1984; Scudder et al., 1986; Schwartz et al., 1988). The latter was found to have almost no effect on the ion motion and was kept at s_HT = 0.1 in the subsequent analysis. The post-tracing magnetic field was derived from the pressure-balance condition, where the ion pressure was determined numerically and, for the electron pressure, the polytropic equation of state \(p_e \propto n^{5/3}\) was used, together with quasineutrality.

For convenience, the coordinate is measured in units of \(r_g = V_u/\Omega_u\). It is clearly seen that for the low potential the first peak is shifted farther downstream from the ramp and its amplitude is higher than that of the second peak. In the case of the higher potential, the first peak occurs at the downstream edge of the ramp and its amplitude is lower than that of the second one. Figure 2 illustrates the difference in the behavior of the normal component of the ion velocity, v_x, in the two cases. In the low-potential case this component continues to decrease well beyond the ramp, and the subsequent dips become more and more shallow with distance from the ramp. In the high-potential case v_x starts to increase upon crossing the ramp; the second dip is deeper because lower values of v_x are reached, as explained above. With the increase of the magnetic compression, the CGP rapidly increases. For B_d/B_u = 2 the CGP is rather high: s_cr = 0.75. In most shocks the cross-shock potential is expected to be below this value (Dimmock et al., 2012). In low-β_i plasmas all ions are directly transmitted across the shock without reflection, and the above findings can be summarized as follows: (a) below the CGP the first peak is the strongest; (b) with the increase of the potential toward the CGP the first peak moves closer to the ramp; (c) upon crossing the CGP the first peak stands at the downstream edge of the ramp and is no longer the strongest.

Effects of ion reflection. Ion reflection occurs in supercritical and marginally critical shocks. Ion reflection is a kinetic process, and the fate of an ion entering the shock front depends on the ion's initial velocity. There are two major modes of ion reflection: post-ramp and in-ramp reflection. Post-ramp reflection occurs when an ion crosses the ramp, gyrates behind it, and returns to the ramp to cross it toward upstream, but then turns around again inside the ramp, moving toward downstream. In-ramp reflection occurs when an ion changes its direction of motion inside the ramp and starts moving toward upstream. In both modes, reflection occurs due to the combined effects of the electric and magnetic forces.
Since the transition from upstream to ramp and further downstream is continuous, there is no strict separation between the two modes. The efficiency of post-ramp reflection increases most strongly with the increase of the magnetic compression B_d/B_u. It also increases with the increase of the ratio \(v_T/V_u = \sqrt{\beta_i/2}/M\) and with the decrease of the cross-shock potential s (Gedalin, 1996). The inverse dependence on the cross-shock potential is related to the fact that the chances of a downstream gyrating ion returning to the ramp are higher if the gyration speed is higher, while the cross-shock potential takes energy from an ion upon crossing the ramp. The efficiency of in-ramp reflection increases with the increase of the ratio v_T/V_u and of the cross-shock potential s (Gedalin, 2016b). It can be explained most simply in the approximation of specular reflection, which ignores magnetic deflection: a particle with initial v_x is reflected within the ramp if \(m_iv_x^2/2 < q\phi\). For an initial Maxwellian distribution, about 5% of the incident ions are reflected if \(m_i(V_u - 2v_T)^2/2 = q\phi\), which allows us to define the critical reflection potential (CRP) \(s_{5\%} = (1 - 2v_T/V_u)^2\). In this approximation, in-ramp reflection depends neither on the magnetic compression nor on the shock angle, and is stronger for lower Mach numbers at given β_i and s. In reality, magnetic deflection enhances the reflection, which is never specular. In what follows we distinguish between reflected and quasi-reflected ions. Figure 4 illustrates the difference between the ion populations and the terminology proposed by Gedalin (2016b). The difference is that quasi-reflected ions do not appear in the upstream region and do not contribute to foot formation. Each reflected or quasi-reflected ion makes a loop and moves along the shock front. As a result, all these ions acquire energy in the NIF, so that they should be clearly distinguishable from the directly transmitted ions inside the ramp and behind it, both in a distribution plot and in a spectrogram; in both cases there should be a noticeable gap between the two populations. At low β_i and small B_d/B_u both modes of reflection should be suppressed. In high Mach number shocks B_d/B_u is large while \(v_T/V_u = \sqrt{\beta_i/2}/M\) is small unless β_i is large; in such shocks post-ramp reflection should dominate. In marginally critical and weakly supercritical shocks, in-ramp reflection should dominate unless β_i is too small. One can expect that in-ramp reflection would cause a sharper drop of the ion pressure and therefore a steeper increase of the magnetic field. A more detailed analysis can be done numerically, where the cross-shock potential s and the ion β_i are fully controlled. Figure 5 shows the results of the test-particle adjustment for a shock with β_i = 0.2 and magnetic compression R = 1.85. The adjustment of the downstream magnetic field predicted by the test-particle analysis to the initial model field is achieved with the cross-shock potential s = 0.65, which is below the corresponding CGP, s_cr = 0.7, but above the corresponding CRP, s_5% = 0.49. The profile (left panel) shows a steeper increase toward the overshoot, with the first peak exceeding the subsequent peaks. The same panel shows the ion orbits, and the right panel shows a slice of the ion distribution covering the half of the ramp adjacent to the upstream region. Both clearly show the presence of a non-gyrotropic population of quasi-reflected ions.
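As a quick numerical check of these definitions, the sketch below evaluates the CGP, the CRP, and the specular-approximation reflected fraction for a drifting Maxwellian. The Mach number M = 2.1 is an assumption inferred here from the quoted s_5% ≈ 0.49 via v_T/V_u = √(β_i/2)/M; it is not stated explicitly in the text.

```python
# Critical potentials and the specular-approximation reflected fraction:
# an incident ion with v_x drawn from a drifting Maxwellian (mean V_u,
# spread v_T) is reflected in the ramp if m*v_x^2/2 < e*phi, i.e. if
# v_x < sqrt(s)*V_u.
from math import erf, sqrt

def s_cgp(R):
    """Critical gyration potential, s_cr = 1 - 1/R^2."""
    return 1.0 - 1.0 / R**2

def s_crp(vT_over_Vu):
    """Critical reflection potential, s_5% = (1 - 2 v_T/V_u)^2."""
    return (1.0 - 2.0 * vT_over_Vu)**2

def reflected_fraction(s, vT_over_Vu):
    """P(v_x < sqrt(s)*V_u) for a 1-D Gaussian v_x with mean V_u, width v_T."""
    z = (sqrt(s) - 1.0) / vT_over_Vu
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Parameters of the test-particle case in the text: R = 1.85, beta_i = 0.2;
# M = 2.1 is an assumed value used only to convert beta_i into v_T/V_u.
R, beta_i, M = 1.85, 0.2, 2.1
vT = sqrt(beta_i / 2.0) / M
print(f"v_T/V_u = {vT:.3f}, CGP = {s_cgp(R):.2f}, CRP = {s_crp(vT):.2f}")
print(f"reflected fraction at s = 0.65: {reflected_fraction(0.65, vT):.3f}")
```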
The incident and quasi-reflected populations are clearly separated in velocity space and in energy. Figure 6 shows the results of the test-particle adjustment for the same compression ratio and cross-shock potential but lower β_i = 0.05. In this case there are very few quasi-reflected ions and the shock profile follows the low-β prescription shown in the top panel of Figure 2: the magnetic field increase toward the overshoot is less steep and the first peak is shifted further downstream. Figure 7 shows the results of the test-particle adjustment for the same compression ratio and β_i = 0.2 but lower cross-shock potential s = 0.4. In this case there are also very few quasi-reflected ions and the shock profile again follows the low-β prescription shown in the top panel of Figure 2: the magnetic field increase toward the overshoot is less steep and the first peak is shifted further downstream. Figure 8 shows the results of the test-particle adjustment for the same compression ratio and β_i = 0.4 but lower cross-shock potential s = 0.4. This value is slightly above the value of s_5%, so that the number of reflected ions is noticeable. Yet the first maximum is shifted downstream and the magnetic field increase toward the overshoot is not steep.

Observations. A detailed example of a pair of very low Mach number shocks with magnetic compression B_d/B_u ≈ 1.2 and β_i ≈ 0.08 is given by Pope et al. (2019), Figure 4, where the cross-shock potentials are also calculated from observations and shown to agree well with the theoretical findings above. Namely, the shock with the lower potential has the first peak higher than the successive ones, while the shock with the higher potential has the second peak higher than the first one. The first peak follows a steep magnetic field increase and is the largest; thus, we expect that s_5% < s < s_cr, which is in good agreement with the adjusted value of s = 0.65. Figure 10 shows the corresponding gap for the analyzed shock in Figure 8. The spectrogram is made in a reference frame ("spacecraft") moving with the velocity 1.5V_u along the shock normal. Figure 11 shows a similar gap in the spectrogram of a shock measured by THEMIS C on 2011/11/28, in which reflected ions are detected. It is not possible to compare the gap for the analyzed shock directly with observations, since the analysis is done in the normal incidence frame while the observed spectrograms are produced in the spacecraft frame. Figure 12 shows the magnetic profile of a shock observed by THEMIS C. This shock is also subcritical. It has a lower magnetic compression, R = 1.4, with a slightly higher β_i ≈ 0.2. The angle is large, θ = 86°, while the Mach number is lower, M ≈ 1.65. The corresponding CGP is s_cr ≈ 0.5 and the CRP is s_5% ≈ 0.38. The absence of ions reflected inside the ramp indicates an insufficient potential, so we expect s < 0.38. Adjustment using the advanced test-particle analysis gives s ≈ 0.35.

Discussion and conclusions. Magnetic field measurements at heliospheric shocks are by far the best quality measurements as regards both precision and resolution. The resolution of particle measurements is much worse: their precision is limited by geometric factors and the finite number of detectors. Measurements of the electric field are typically the most difficult. Therefore, any cross-check of less reliable measurements on the basis of better ones is important.
In particular, if measurements of the magnetic field enable us to fill gaps in particle and cross-shock potential measurements, that would substantially improve our ability to compare observations and theory. In the present paper we examine the implications of the shape of the downstream magnetic oscillation trail for the cross-shock potential. It appears that certain limits can be placed on the potential using knowledge of the Mach number, the magnetic compression, β_i, and the first peaks of the downstream magnetic field. The two critical kinetic phenomena are the gyration of the center of the incident distribution upon crossing the shock and the onset of ion reflection within the ramp. These two features are related to the two critical values of the cross-shock potential that have been defined in the simplified case of a narrow perpendicular shock. The derived CGP, \(s_{cr} = 1 - (B_u/B_d)^2\), and CRP, \(s_{5\%} = (1 - 2v_T/V_u)^2\), are approximations that do not properly take into account the ramp width and the shock angle. Yet they provide certain limits on the possible cross-shock potentials consistent with the measured Mach number, β_i, and magnetic compression. Numerical test-particle analyses have shown that these limits are in good agreement with the parameters obtained by adjusting the predicted profile to the required downstream asymptotic value. It is found that for s_cr < s < s_5% the first downstream peak is at the downstream edge of the ramp and is weaker than the second one. For s < s_cr < s_5% and for s < s_5% < s_cr the first downstream peak is shifted farther downstream and is the strongest. For s_5% < s < s_cr reflected ions are seen, the rise toward the overshoot is substantially steeper, and the first downstream peak is at the downstream edge of the ramp and is the strongest. Thus, observations of the downstream magnetic oscillations may be used to place restrictions on the cross-shock potential. At this stage the analysis is limited to subcritical, marginally critical, and weakly supercritical shocks. Higher super-criticality will require a separate study, including also post-ramp reflected ions.
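The diagnostic rules of the last paragraph can be condensed into a small decision function; the sketch below is purely illustrative and simply encodes the three regimes stated above.

```python
def overshoot_regime(s, s_cr, s_5):
    """Map the cross-shock potential s onto the first-peak morphology,
    following the three cases stated in the conclusions above."""
    if s_5 < s < s_cr:
        return "reflection: steep rise, first peak at the ramp edge, strongest"
    if s_cr < s < s_5:
        return "first peak at the ramp edge, weaker than the second"
    if s < min(s_cr, s_5):
        return "first peak shifted downstream, strongest"
    return "potential above both critical values: outside the cases discussed"

# The beta_i = 0.2, R = 1.85 test case from the text (adjusted s = 0.65):
print(overshoot_regime(0.65, s_cr=0.70, s_5=0.49))
```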
2019-09-27T05:20:42.684Z
2019-08-29T00:00:00.000
{ "year": 2020, "sha1": "79a3297d36b16ddbf49cf262408e4cc6620995fe", "oa_license": "CCBY", "oa_url": "https://angeo.copernicus.org/articles/38/17/2020/angeo-38-17-2020.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "f591d5ce67d84ede754d1f9477330b3dffffaa97", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
225930541
pes2o/s2orc
v3-fos-license
Anomaly Detection in Credit Card Transactions using Machine Learning Anomaly detection is a method of identifying suspicious occurrences of events and data items that could create problems for the concerned authorities. Data anomalies are usually associated with issues such as security breaches, server crashes, bank fraud, structural flaws in buildings, clinical defects, and many more. Credit card fraud has now become a massive and significant problem in today's climate of digital money. Fraudulent transactions are carried out with such elegance that they resemble legitimate ones. So, this research paper aims to develop an automatic, highly efficient classifier for fraud detection that can identify fraudulent transactions on credit cards. Researchers have suggested many fraud detection methods and models that use different algorithms to identify fraud patterns. In this study, we review the Isolation Forest, a machine learning technique, and train the system with the help of H2O.ai. The Isolation Forest has not been much used or explored in the area of anomaly detection. The overall performance of the model was evaluated primarily based on widely accepted metrics: precision and recall. The test data used in our research come from Kaggle.

Credit card frauds occur at a rapid pace [3]. These illegal activities aim to withdraw illegitimate funds from an account or purchase goods and services without paying one's own money, which causes severe damage to credit card holders and banking organizations [8]. Credit card fraud has become a significant obstacle to e-commerce growth, with a drastic effect on the economy. Thus, identification of fraud is crucial and vital, and such illegal activities must be monitored in the background to eliminate them and prevent repeated incidents [12]. To stop such frauds, we require an automated fraud detection system capable of distinguishing fraudulent transactions from genuine ones [9]. To solve this problem, machine learning can play a significant role in building such a detection system to help prevent credit card fraud [10]. Machine learning consists of methods that derive useful information from vast volumes of data to aid the decision-making process and predictive accuracy [7][8]. A credit card fraud detection system mainly involves distinguishing fraudulent transactions from authentic ones [11].

Various challenges arise while creating a fraud detection system:
 Unbalanced data: less than 0.5% of credit card transactions are fraudulent.
 Operational efficiency: less than 8 seconds to flag a transaction.
 Incorrect flagging: avoid harassing real customers.

Fig 1: Classifying fraudulent transactions

Training such a fraud detection system can be carried out in three ways:
• Supervised: In this type of learning, the supervisor instructs the machine using a well-"labeled" dataset. This means detailed information about the data items and observations is available, already tagged with the correct solution. After building, the model is provided with a new dataset to analyze and classify.
• Semi-supervised: In semi-supervised learning, the machine is trained using both labeled and unlabeled datasets. It is a combination of supervised and unsupervised learning, and it is employed more commonly than purely supervised techniques. The dataset contains more unlabeled data than labeled data.
• Unsupervised: In unsupervised learning, inferences are drawn from datasets consisting of input data without labeled responses.
Among all three strategies, this is the most commonly used method. Unsupervised learning is a self-contained methodology in which the model assumes that exceptions occur infrequently in a dataset. It allows more complex processing tasks to be performed and can be more unpredictable compared with other methods. In this project, we use the H2O.ai framework to implement the Isolation Forest, which is an unsupervised learning method. Other algorithms identify anomalies by profiling the usual data points, but the Isolation Forest is an ensemble method: it creates a tree-like structure that helps to make decisions, and anomalies can be detected near the root of a tree and then analyzed further [13].

II. RELATED WORK In paper [1], the authors propose an anomaly detection methodology using an artificial neural network and a decision tree. This methodology consists of two levels: first, a decision tree is used to produce a new dataset, which is then passed into a multilayer neural network to classify the data. This two-level system results in a very low false detection rate. A detailed study of various machine learning algorithms and ANNs was done by the authors of [2], in which they found that an Artificial Neural Network gives more precise results compared to K-Nearest Neighbour (KNN), Logistic Regression, Support Vector Machine (SVM), and Decision Tree [10]. Another paper [3] suggests that the Random Forest methodology provides the most reliable results, followed by Logistic Regression and SVM. In paper [4], the outcome is that the decision tree approach performs better than the SVM approach in solving the problem; moreover, as the size of the dataset grows, the accuracy of the SVM-based system surpasses that of the decision-tree-based system, although the number of frauds detected by SVM models is still far less than the number identified by decision tree approaches. The system used in paper [5] presents a novel approach that uses multiple anomaly detection algorithms to detect fraudulent transactions. Paper [6] applied outlier detection and the KNN methodology to optimize the results in fraud detection problems; the primary aim was to improve the fraud detection rate and reduce false alarms.

III. METHODOLOGY The method presented in this paper uses one of the latest machine-learning algorithms for identifying anomalous behavior, called Isolation Forest.

A. Isolation Forest Isolation Forest is an unsupervised ensemble method based on the concept of isolating ("separating away") anomalies [13]. No point-based distance calculation and no profiling of regular instances are done. Instead, the Isolation Forest builds an ensemble of decision trees; the principle behind this technique is to isolate anomalies through partitions. An ensemble of decision trees is created for a given dataset, a path length is calculated for each data point, and the data points with the shortest average path length are considered anomalies.

B. H2O.ai H2O supports the most widely used supervised and unsupervised machine learning algorithms. It is a fully open-source, ultra-high-performance, in-memory, predictive analytics machine learning platform with linear scalability. It includes gradient boosted machines, generalized linear models, and deep learning, allowing model building without needing expertise in deploying or tuning machine learning models.
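For concreteness, the normalization of path lengths into an anomaly score follows Liu et al.'s original isolation forest formulation: s(x, n) = 2^(-E[h(x)]/c(n)), where c(n) is the average path length of an unsuccessful binary-search-tree lookup over n points. A minimal sketch (the sample size and path lengths below are illustrative):

```python
# Isolation-forest anomaly score: observations with short average path
# lengths E[h(x)] across the trees get scores close to 1 (anomalous),
# while scores well below 0.5 indicate inliers.
import math

EULER_GAMMA = 0.5772156649

def c(n):
    """Average path length of an unsuccessful BST search on n points,
    used to normalize tree depths."""
    if n <= 1:
        return 0.0
    return 2.0 * (math.log(n - 1) + EULER_GAMMA) - 2.0 * (n - 1) / n

def anomaly_score(avg_path_length, sample_size):
    """s(x, n) = 2 ** (-E[h(x)] / c(n))."""
    return 2.0 ** (-avg_path_length / c(sample_size))

# With a subsample of 256 points, a point isolated after ~4 splits on
# average scores far higher than one needing ~12 splits:
for h in (4.0, 12.0):
    print(f"E[h(x)] = {h:>4}: score = {anomaly_score(h, 256):.3f}")
```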
Anomaly Detection. This has two stages, training and testing: the training stage involves building the Isolation Forest, and the testing stage involves passing each data point through each tree to calculate the average number of edges required to reach an external node. We begin by building several decision trees, selecting an attribute at random. Then we choose a split value between the maximum and minimum values of that randomly selected attribute, again at random. Ideally, each terminating node of a tree contains one observation from the dataset, which isolates the sample. We presume that if one observation in our dataset is similar to the others, it will require more random splits to isolate it precisely, as opposed to isolating an outlier. As we create multiple decision trees, which together form an isolation forest, we calculate the path length for each observation. The number of splits needed to distinguish an observation is equivalent to the path length from the root node to the leaf node. This path length is then averaged over the forest of decision trees, which serves as a measure of anomalousness and is further used to determine the final anomaly score: the shorter the path length, the more likely the observation is anomalous. The h2o frame containing the results of the predictions presents a normalized anomaly score. Since we are working in an unsupervised manner, we need a threshold: if we have an estimate of the raw number of outliers in our dataset, we can find the score's corresponding quantile value and use it as a threshold for our predictions. The corresponding quantile value of the score can be computed from the generated h2o frame and used as the limit value. We use this threshold to classify the anomalous segment of the dataset.

Evaluation. Since the Isolation Forest is an unsupervised technique, we need classification metrics that do not depend on the prediction threshold and give an exact scoring value. Two such metrics are the Area Under the Precision-Recall Curve (AUCPR) and the Area Under the Receiver Operating Characteristic Curve (AUC). AUC is a statistic measuring how well a binary classifier distinguishes true positives from false positives. The optimal AUC score is 1; random guessing yields the baseline value of 0.5. AUCPR summarizes a binary classifier's precision-recall trade-off across the thresholds of the continuous prediction ranking. The highest AUCPR score is 1; the baseline score is the relative frequency of the positive class. AUCPR is preferred over AUC for an extremely unbalanced dataset, because AUCPR is very sensitive to true positives, false negatives, and false positives while not being affected by true negatives. We can see that the H2O isolation forest implementation on average scores equal to the scikit-learn implementation. The significant advantage of H2O is its ability to scale to many nodes and work seamlessly with Apache Spark; this enables the processing of extraordinarily large datasets, which can be crucial in the transactional data setting. The predictive accuracy of this proposed classification model using the Isolation Forest to detect fraud in credit card transactions was observed to be 98.72% by AUCPR, which is significantly useful, and the fraud detection error is reduced. IV.
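A hedged end-to-end sketch of this train-score-threshold workflow using H2O's isolation forest follows. The file path, hyperparameters, and quantile level are assumptions for illustration, not the paper's exact settings, and the quantile-frame indexing may differ slightly between H2O versions.

```python
# Train an H2O isolation forest on the Kaggle credit-card data, score every
# transaction, and flag the top quantile of anomaly scores.
import h2o
from h2o.estimators import H2OIsolationForestEstimator

h2o.init()

# "Class" is the fraud label; it is excluded because training is unsupervised.
frame = h2o.import_file("creditcard.csv")                # assumed local path
predictors = [col for col in frame.columns if col != "Class"]

model = H2OIsolationForestEstimator(ntrees=100, sample_size=256, seed=42)
model.train(x=predictors, training_frame=frame)          # no y: unsupervised

scores = model.predict(frame)   # columns: "predict" (normalized anomaly
                                # score) and "mean_length" (avg path length)

# Quantile threshold, assuming roughly 0.5% of transactions are anomalous.
quantile_frame = scores["predict"].quantile(prob=[0.995])
threshold = quantile_frame[0, 1]                         # [prob, quantile]
anomalies = scores[scores["predict"] > threshold, :]
print(f"threshold = {threshold:.4f}, flagged rows = {anomalies.nrow}")
```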
IV. CONCLUSION AND FUTURE SCOPE

In this project, we present a technique that exhibits a strong ability to distinguish anomalies from mere inliers by creating an ensemble of decision trees and computing a path length for every data point. For the evaluation of our methodology we use the Area Under the Precision-Recall Curve (AUCPR), which is more informative for this problem than the Area Under the ROC Curve. Lastly, we demonstrate the efficiency of our approach: the fraud detection model achieves a score of 98.72%, which indicates a significantly better approach than other fraud detection techniques. The main limitation of the fraud detection system is the unavailability of a balanced dataset for training purposes and the overall shortage of data. If financial institutions made critical datasets covering various fraudulent activities available, the research outcomes would be more efficient and of higher quality.
2020-07-23T09:04:43.506Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "b1b6dbcd9cdedb1522bbe64d6cfa69f5da1af7d8", "oa_license": null, "oa_url": "https://doi.org/10.21276/ijircst.2020.8.3.5", "oa_status": "GOLD", "pdf_src": "ElsevierPush", "pdf_hash": "65d0d2269f0987c179529fef155926caa7f3daa7", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
267681922
pes2o/s2orc
v3-fos-license
Radio spectra of pulsars fitted with the spectral distribution function of the emission from their current sheet

In their catalogue of pulsars' radio spectra, Swainston et al. (2022, PASA, 39, e056) distinguish between five different forms of these spectra: those that can be fitted with (i) a simple power law, (ii) a broken power law, (iii) a low-frequency turn-over, (iv) a high-frequency turn-over or (v) a double turn-over spectrum. Here, we choose two examples from each of these categories and fit them with the spectral distribution function of the caustics that are generated by the superluminally moving current sheet in the magnetosphere of a non-aligned neutron star. In contrast to the prevailing view that the curved features of pulsars' radio spectra arise from the absorption of the observed radiation in high-density environments, our results imply that these features are intrinsic to the emission mechanism. We find that all observed features of pulsar spectra (including those that are normally fitted with simple or broken power laws) can be described by a single spectral distribution function and regarded as manifestations of a single emission mechanism. From the results of an earlier analysis of the emission from a pulsar's current sheet and the values of the fit parameters for each spectrum, we also determine the physical characteristics of the central neutron star of each considered example and its magnetosphere.

INTRODUCTION

Attempts at explaining the radiation from pulsars have so far been focused mainly on mechanisms of acceleration of charged particles (see, e.g., the references in Melrose et al. 2021): an approach spurred by the fact that, once the relevant version of this mechanism is identified, one can calculate the electric current density associated with the accelerating charged particles involved and thereby evaluate the classical expression for the retarded potential that describes the looked-for radiation. In the present paper, however, we evaluate the retarded potential, and hence the generated radiation field, using the macroscopic distribution of electric charge-current density that is already provided by the numerical computations of the structure of a non-aligned pulsar magnetosphere (Ardavan 2021, Section 2). Both the radiation field thus calculated and the electric and magnetic fields that pervade the pulsar magnetosphere are solutions of Maxwell's equations for the same charge-current distribution. These two solutions are completely different, nevertheless, because they satisfy different boundary conditions: the far-field boundary conditions with which the structure of the pulsar magnetosphere is computed are radically different from the corresponding boundary conditions with which the retarded solution of these equations (i.e. the solution describing the radiation from the charges and currents in the pulsar magnetosphere) is derived (see Section 3 and the last paragraph in Section 6 of Ardavan 2021). Numerical computations based on the force-free and particle-in-cell formalisms have now firmly established that the magnetosphere of a non-aligned neutron star entails a current sheet outside its light cylinder whose rotating distribution pattern moves with linear speeds exceeding the speed of light in vacuum (see Spitkovsky 2006; Kalapotharakos et al.
2016; and the references in Philippov & Kramer 2022). However, the role played by the superluminal motion of this current sheet in generating the multi-wavelength, focused pulses of radiation that we receive from neutron stars is not generally acknowledged. Given that the superluminally moving distribution pattern of this current sheet is created by the coordinated motion of aggregates of subluminally moving charged particles (see Ginzburg 1972; Bolotovskii & Bykov 1990), the motion of any of its constituent particles is too complicated to be taken into account individually. Only the densities of charges and currents enter Maxwell's equations, on the other hand, so that the macroscopic charge-current distribution associated with the magnetospheric current sheet takes full account of the contributions toward the radiation that arise from the complicated motions of the charged particles comprising it. The radiation field generated by a uniformly rotating volume element of the distribution pattern of the current sheet in the magnetosphere of a non-aligned neutron star embraces a synergy between the superluminal version of the field of synchrotron radiation and the vacuum version of the field of Čerenkov radiation. Once superposed to yield the emission from the entire volume of the source, the contributions from the volume elements of this distribution pattern that approach the observation point with the speed of light and zero acceleration at the retarded time interfere constructively and form caustics in certain latitudinal directions relative to the spin axis of the neutron star. The waves that embody these caustics are more focused the farther they are from their source: as their distance from their source increases, two nearby stationary points of their phases draw closer to each other and eventually coalesce at infinity. By virtue of their narrow peaks in the time domain, the resulting focused pulses thus procure frequency spectra whose distributions extend from radio waves to gamma-rays (Ardavan 2021, Table 1 and Section 5.4).

This paper is concerned with the radio spectra of pulsars. Its task is to ascertain whether the spectrum of the caustics generated by a pulsar's current sheet (Section 2) can account for all five categories of spectral shapes catalogued¹ by Swainston et al. (2022). To this end, it presents fits to two examples (with the largest number of known data points) of the catalogued spectra in each category (Section 3) and relates the values of their fit parameters to the physical characteristics of the central neutron star of the corresponding pulsar and its magnetosphere (Section 4).

RADIO SPECTRUM OF THE CAUSTICS GENERATED BY THE SUPERLUMINALLY MOVING CURRENT SHEET

The frequency spectrum of the radiation that is generated as a result of the superluminal motion of the current sheet in the magnetosphere of a non-aligned neutron star was presented, in its general form, in equation (177) of Ardavan (2021, Section 5.3). In a case where the magnitudes of the vectors denoted by P and Q in equation (177) of Ardavan (2021) are appreciably larger than those of their counterparts, P and Q, and the dominant contribution towards the Poynting flux of the radiation is made by only one of the two terms corresponding to = 1 and = 2, e.g.
= 2, that equation can be written as where Ai and Ai′ are the Airy function and the derivative of the Airy function with respect to its argument, respectively, ν = 2πf/ω is the frequency of the radiation in units of the rotation frequency ω/(2π) of the central neutron star, and 0 and 21 are two positive scalars. The coefficients of the Airy functions in the above expression stand for P 2 = −1/2 P (2) 2 when ≥ 2 and for 2 when < 2, in which the complex vectors 2 are defined by equations (138)-(146) of Ardavan (2021) and 2 designates a threshold frequency.

The variable 21 determines the separation between two nearby stationary points of the phases of the received waves: the smaller the value of 21, the more focused is the observed radiation and the higher is its frequency content (Ardavan 2021, Section 4.5). The radio component of the present radiation is mostly generated by values of 21 that range from 10⁻⁴ to 10⁻². In this paper, we replace 2, which only have weak dependences on 21, by their values for 21 = 10⁻³ and treat them as constant parameters. Evaluation of the right-hand side of equation (2) results in where = 0 when < 2 and = 2 when ≥ 2, and ℑ and * denote an imaginary part and the complex conjugate, respectively. The above spectrum is emblematic of any radiation that entails caustics (see Stamnes 1986). To take account of the fact that the parameter 21 assumes a non-zero range of values across the (non-zero) latitudinal width of the detected radiation beam (Ardavan 2021, Section 4.5), we must integrate with respect to 21 over a finite interval 0 ≤ 21 ≤ 0 with 0 ≪ 1 and 0 ≤ < 1. Performing the integration of the Airy functions in equation (2) with respect to 21 by means of Mathematica, we thus obtain where ₂F₃ and G²⁴₃₁ are, respectively, the generalised hypergeometric function (see Olver et al. 2010) and the generalised Meijer G-function². The variable that appears in the above expressions is related to the frequency of the radiation via = 4 0³/(3). The scale and shape of the spectrum described by equation (4) depend on whether equals 0 or 2 (i.e. on whether the dimensionless frequency lies below or above the threshold frequency 2) and on the five parameters , , , 0 and : parameters whose values are dictated by the characteristics of the magnetospheric current sheet (see Section 4). The parameters , and determine the shape of the spectral distribution while the parameters and 0 respectively determine the position of this distribution along the flux-density (F) and the frequency () axes.

FITS TO THE DATA ON EXAMPLES OF VARIOUS FORMS OF RADIO SPECTRA

In this section, we choose from each of the five galleries of pulsar spectral shapes catalogued by Swainston et al. (2022) two examples with the largest number of known data points and fit them with the spectral distribution function described by equation (4). In the case of each example, we use Mathematica's 'NonlinearModelFit' procedure³ and the statistical information that it provides to determine the values of the fit parameters in equation (4) and their standard errors. Where, owing to the complexity of the expression in equation (4), this procedure fails to work and so the fits to the data are obtained by elementary iteration, only the values of these parameters are specified.

¹ https://all-pulsar-spectra.readthedocs.io/en/latest/
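As a rough, self-contained illustration of what such a fit involves, the sketch below evaluates a generic Airy-type caustic spectrum and fits it to mock flux-density data with scipy, standing in for the Mathematica 'NonlinearModelFit' step. The functional form, the parameter names (a, b, s) and the mock data are assumptions made for illustration; they are not the paper's equation (4), which involves generalised hypergeometric and Meijer G-functions.

```python
import numpy as np
from scipy.special import airy
from scipy.optimize import curve_fit

def caustic_spectrum(nu, a, b, s):
    """Generic Airy-type caustic spectrum: a weighted sum of Ai^2 and Ai'^2
    evaluated at z = s * nu**(2/3). Illustrative stand-in only."""
    z = s * nu ** (2.0 / 3.0)
    ai, aip, _, _ = airy(z)
    return (a * ai) ** 2 + (b * aip) ** 2

# Mock spectral data: frequency in MHz, flux density in arbitrary units,
# with 5% multiplicative noise.
nu_mhz = np.logspace(2, 4, 25)
flux = caustic_spectrum(nu_mhz, a=3.0, b=1.5, s=5e-3)
rng = np.random.default_rng(0)
flux_noisy = flux * (1.0 + 0.05 * rng.normal(size=nu_mhz.size))

# Nonlinear least-squares fit, analogous in spirit to NonlinearModelFit;
# the diagonal of the covariance matrix gives the parameter standard errors.
popt, pcov = curve_fit(caustic_spectrum, nu_mhz, flux_noisy, p0=[1.0, 1.0, 1e-2])
perr = np.sqrt(np.diag(pcov))
print(dict(zip(["a", "b", "s"], popt)), perr)
```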
The results are presented in Figs. 1-10 and their captions. The horizontal and vertical axes in these logarithmic plots are marked with the values of F and ν_MHz, respectively, where ν_MHz stands for the frequency of the radiation in units of MHz. It can be seen from Figs. 1-10 that the fit residuals in the case of each pulsar are smaller than the corresponding observational errors for the majority of the data points. Since there are no two values of any of the fit parameters for which the spectrum described by equation (4) has the same shape and position, the specified values of the fit parameters are moreover unique.

Derivation of the connecting relations

From equations (5), (3) and (1), it follows that the parameters , , and 0 in equation (4) are related to the characteristics of the source of the observed radiation via the quantities , 0, 2 that appear in the expression for the flux density F. In this section we use the results of the analysis presented in Ardavan (2021) to express these quantities in terms of the inclination angle of the central neutron star, the magnitude of the star's magnetic field at its magnetic pole, B₀ = 10¹² B0 Gauss, the radius of the star, r₀ = 10⁶ r̂ cm, the rotation frequency of the star, ω = 10² P⁻¹ rad/s, and the spherical polar coordinates, R (in kpc, where 1 kpc = 3.085 × 10²¹ cm) and θ, of the observation point in a frame whose centre and z-axis coincide with the centre and spin axis of the star.

For certain values of , denoted by 2, the function 2() has an inflection point (see Ardavan 2021, Section 4.4, and Ardavan 2023c, Section 2). For any given inclination angle, the position 2 of this inflection point and the colatitude θ₂ of the observation points for which 2() has an inflection point follow from the solutions to the simultaneous equations 2/ = 0 and ²2/² = 0. (Explicit expressions for the derivatives that appear in these equations can be found in Appendix A of Ardavan 2021.) For values sufficiently close to 2, the separation between the maximum = 2max and minimum = 2min of 2, and hence the value of 21, are small. In other words, the focused radiation beam is centred on the colatitude θ₂ plotted in Fig. 11 (Ardavan 2021, Section 5.5). Since the fit parameter 0, which denotes the maximum value assumed by 21, lies between 10⁻² and 10⁻⁴ for most of the examples considered in Section 3, we have chosen the separation between = 2max and = 2min in each example such that the value of 21 in equation (17) is of the order of 10⁻³.

Equations (5), (3) and (9)-(11) jointly yield when = 0, and = 6.76 × 10¹⁵ r̂⁶ B0² ⁴ ⁻² P⁻¹ κ_th Jy, (22) when = 2. Equations (20) and (22) can be written as κ_th = κ_obs and κ_th = κ_obs, respectively. While κ_obs and κ_obs only contain the observed parameters of the pulsar and its emission, the values of κ_th and κ_th are determined by the physical characteristics of the magnetospheric current sheet that acts as the source of the observed emission. For a given value of 21, the right-hand sides of equations (21) and (23) are functions of the inclination angle and the observer's distance R only (see Fig. 12). The values of the fit parameters together with the relations in equation (24) and the plots of κ_th in Fig. 12 thus enable us to connect the parameters of the fitted spectra to the physical characteristics of their sources.

Application of the connecting relations to the fitted spectra

In this section, we use the relations derived in Section 4.1, the values of the fit parameters given in the captions to Figs.
1-10 and the data listed in the ATNF Pulsar Catalogue (Manchester et al. 2005) to determine (or set limits on) certain attributes of the central neutron stars of the pulsars considered in Section 3 and their magnetospheres.

Once the values of the fit parameters 0 and given in the caption to Fig. 1, the period (0.405 s) and the distance (7 kpc) of the pulsar J1915+1009 are inserted in equation (26), the resulting value of κ_obs and the second member of equation (24) yield equation (27). If the central neutron star of this pulsar has radius 10⁶ cm (i.e. r̂ = 1) and the magnetic field 2.51 × 10¹² Gauss at its magnetic pole (i.e. B0 = 2.51), as predicted by the formula for magnetic dipole radiation, then equation (27) and the curve delineated by the red dots in Fig. 12 imply that the value of the angle between the rotation and magnetic axes of J1915+1009 is either 3.8° or 42.3°. (Here, and in other similar cases, a choice between the two possible values can be made by comparing the pulse profile of the pulsar in question with the theoretically predicted ones presented in Section 5.1 of Ardavan 2021.) Depending on whether the inclination is 3.8° or 42.3°, the direction along which the radiation is observed forms the angle θ₂ = 3.8° or 35.1° with the spin axis of this pulsar (see Fig. 11). Within the framework of the present emission mechanism, the value of B0 can be significantly different from that given by the formula for magnetic dipole radiation, in which case equation (27) and Fig. 12 merely determine the required value of B0 r̂² as a function of the inclination.

In the same way, the values of 0 and in the caption to Fig. 2 together with the period 0.361 s and the distance 1.26 kpc yield equation (28). If r̂ is set equal to 1 and the value of B0 is assumed to be that given by the formula for magnetic dipole radiation, i.e. 0.879, then equation (28) and the red curve in Fig. 12 imply that the inclination equals either 0.29° or 62.8°. According to Fig. 11, the values of θ₂ corresponding to inclinations of 0.29° and 62.8° are 0.29° and 97.2°, respectively.

Next, the values of 0 and in the caption to Fig. 6 together with the period 0.105 s and the distance 7.11 kpc yield equation (29). In this case, too, the constraint κ_th^(−1/2) ≥ 1.42 (see Fig. 12) implies that either r̂ ≥ 1.22, if B0 has its magnetic-dipole-radiation value 0.198, or B0 ≥ 0.294, if r̂ = 1.

Unlike the examples in Figs. 1-6, for which the parameter in equation (4) equals 2, the example in Fig. 7 is fitted with a flux density for which it equals 0. The first member of equation (24), the values of the fit parameters 0 and in the caption to Fig. 7 and the period 0.307 s and the distance 5.92 kpc jointly yield B0 r̂² = 4.03 × 10⁻⁶ κ_th^(−1/2) for J1829−1751. (33) The resulting value of κ_th^(−1/2) for B0 = 1.32 and r̂ = 1, i.e. 3.28 × 10⁵, implies that the inclination ≃ 90° in this case (see the curve delineated by the blue dots in Fig. 12).

In the case of the example shown in Fig. 8, too, the parameter equals zero, so that the corresponding values of the fit parameters 0 and together with the period 0.306 s and the distance 5.03 kpc yield B0 r̂² = 1.58 × 10⁻⁶ κ_th^(−1/2) for J1835−0643. (34) If B0 has the value 3.56 that is obtained from the formula for magnetic dipole radiation, then κ_th^(−1/2) = 2.25 × 10⁶ for r̂ = 1 and so the inclination ≃ 90° according to Fig. 12.

For the double-turn-over spectrum shown in Fig. 9, the parameter equals 2 and the second member of equation (24), in conjunction with the values of the fit parameters 0 and , the period 0.277 s and the distance 0.31 kpc, yields equation (35). If r̂ is set equal to 1 and the value of B0 is assumed to be that given by the formula for magnetic dipole radiation, i.e.
0.518, then equation (35) and the red curve in Fig. 12 imply that the inclination equals 46.5°. Moreover, the colatitude along which this pulsar is observed has the value θ₂ = 60.8° according to Fig. 11.

CONCLUDING REMARKS

No emission mechanism has as yet been identified in the published literature on pulsars whose spectral distribution function can fit the data on all five categories of spectral shapes depicted in Figs. 1-10 (see Jankowski et al. 2018; Swainston et al. 2022, and the references therein). Curved or gigahertz-peaked spectra are generally thought to reflect the free-free absorption of the pulsar radiation in ionised high-density environments rather than being intrinsic to the emission mechanism (see Rajwade et al. 2016; Kijak et al. 2021, and the references therein). As we have seen, however, the spectral distribution function of the caustics that are generated by the superluminally moving current sheet in the magnetosphere of a non-aligned neutron star single-handedly accounts for all observed features of pulsar spectra (including those that are normally fitted with simple or broken power laws).

A study of the characteristics of the radiation that is generated by this superluminally moving current sheet has already provided an all-encompassing explanation for the salient features of the radiation received from pulsars: its brightness temperature, polarization, spectrum, profile with microstructure and with a phase lag between the radio and gamma-ray peaks (Ardavan 2021, 2022) and the discrepancy between the energetic requirements of its radio and gamma-ray components (Ardavan 2023b). Fits to the exceptionally broad gamma-ray spectra of the Crab, Vela and Geminga pulsars, for example, are provided by the spectral energy distribution of this radiation over the entire range of photon energies so far detected from them (Ardavan 2023c,a).

Detailed analyses of the structure of the magnetospheric current sheet and the coherent emission mechanism by which this sheet creates the caustics underlying the present spectral distribution function can be found in Ardavan (2021). A heuristic account of the mathematical results of those analyses in more transparent physical terms is presented in Ardavan (2022) and Ardavan (2023c, Section 2).

Finally, the following cautionary remark concerning a common misconception is in order: it is often presumed that the plasma equations used in the numerical simulations of the magnetospheric structure of an oblique rotator should, at the same time, predict any radiation that the resulting structure would be capable of emitting (see, e.g. Spitkovsky 2006; Kalapotharakos et al. 2012). This presumption stems from disregarding the role of boundary conditions in the solution of Maxwell's equations. As we have already pointed out, the far-field boundary conditions with which the structure of the pulsar magnetosphere is computed are radically different from the corresponding boundary conditions with which the retarded solution of these equations (i.e. the solution describing the radiation from the charges and currents in the pulsar magnetosphere) is derived (see Section 3 and the last paragraph in Section 6 of Ardavan 2021).

Figure 11. The relationship between the colatitude θ₂ on which the focused radiation beam is centred and the star's inclination angle.
together with the period 0.549 s and the distance 0.4 kpc yield B0 r̂² = 0.38 κ_th^(−1/2) for J0452−1759, (30) an equation that is satisfied by the magnetic-dipole-radiation value of B0 (i.e. 1.8) and r̂ = 1 if the inclination equals 57.7°, and so θ₂ = 84.6°. The values of the fit parameters 0 and used for plotting Fig. 5 together with the period 0.715 s and the distance 1.67 kpc yield B0 r̂² = 25.
2024-02-16T06:45:17.117Z
2024-02-15T00:00:00.000
{ "year": 2024, "sha1": "c07d049fd0f07fed1f7cc9c55183b57984abc76c", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/mnras/advance-article-pdf/doi/10.1093/mnras/stae774/56991861/stae774.pdf", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "bbc216b97cf9533d3e2d2739fbeba4b3ffd86ff5", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
13709593
pes2o/s2orc
v3-fos-license
Dual novel mutations in SLC26A2 in two siblings with multiple epiphyseal dysplasia 4 from a Chinese family: a case report

Background: Multiple epiphyseal dysplasia (MED) is a heterogeneous genetic condition characterized by variable phenotypes, such as short stature (mild to moderate), joint deformities, abnormal gait, scoliosis, and brachydactyly. Recessive mutations in the SLC26A2 gene cause a phenotype of multiple epiphyseal dysplasia-4 (MED-4). In the present study, we identified novel compound heterozygous mutations in the SLC26A2 gene in a Chinese family with two affected sibs with MED-4.

Case presentation: Radiographs revealed hip dysplasia, brachydactyly and scoliosis in patient 1. Radiological examinations in patient 2 also showed hip dysplasia recently. Both were diagnosed with MED-4. SLC26A2 c.824T>C and SLC26A2 c.1198C>T were identified in the two siblings in this family; the mutations were inherited from the parents, one from each.

Conclusions: This is the first Chinese MED-4 family attributed to SLC26A2 mutations, and these results show that these novel compound heterozygous mutations in SLC26A2 contribute to MED-4.

Background

Multiple epiphyseal dysplasia (MED) is a heterogeneous genetic condition characterized by variable phenotypes, such as short stature (mild to moderate), joint deformities, abnormal gait, and early-onset osteoarthritis [1]. MED is associated with structural anomalies in epiphyses and delayed ossification of the epiphyses with small, irregular ossification centers, resulting in moderate shortening. Patients usually appear normal at birth and have good muscular development and normal intelligence [2]. We present two siblings with MED-4 from an eastern Chinese family. Genetic analysis revealed that they were compound heterozygotes for two novel heterozygous mutations in SLC26A2. Further genetic studies and clinical evaluation of their parents revealed that these two mutations were from their father and mother, respectively. This study reports that compound heterozygous mutations in SLC26A2 contributed to MED-4.

Patient 1

Clinical findings
A 12-year-old girl was born normally to nonconsanguineous, healthy eastern Chinese parents after a normal pregnancy. The body length at birth was 51 cm. She was referred to our hospital for diagnosis and treatment. There was no family history of endocrine diseases or musculoskeletal problems. Her parents noticed unequal leg lengths around the age of 6 years, and radiography revealed coxa plana. Abnormal gait and limping were noticed at 9 years. The rotation function of the right leg was limited. For these reasons, she underwent right hip arthroplasty and resection of cartilago acetabularis in another hospital. Postoperative pathology revealed chronic synovitis. Half a year after the surgery, a lump with high skin temperature was noticed by her parents in the left femoribus internus. The flexion-extension function of the left leg was limited. Physical examination revealed the following: height 138 cm. Her intellectual development and hearing were normal. She had brachydactyly, bilateral skewfoot, and lumbosacral scoliosis. The movements of both hips were limited. She did not have a cleft palate, cephalofacial deformities, or respiratory insufficiency. Routine analysis for common skeletal dysplasia excluded any thyroid or growth hormone disorders and immunopathies. Upon analysis, bone metabolism appeared normal.
Radiological findings
Radiological documentation at the ages of 7 and 12 years revealed hip dysplasia with the following deformities: short femoral necks, flattened and irregular femoral heads, and early closure of the epiphyses (Fig. 1a). Spinal radiographs at the ages of 11 and 12 years confirmed evolving scoliosis, which appeared to be a structural vertebral deformity (Fig. 1b). Hand radiographs confirmed the brachydactyly and significantly flattened articular surfaces. The metacarpi and phalanges were mildly shortened (Fig. 1c).

Patient 2

Clinical findings
A 6-year-old boy, the younger brother of patient 1, was also born to the same parents. The pregnancy was normal, and he was normal at birth, without cleft palate or cephalofacial deformities. The body length at birth was 50 cm. An abnormal gait, waddling with short steps, was noticed recently.

Radiological findings
A roentgenologic bone survey showed hip dysplasia with the following abnormalities: both femoral necks were short with flattened heads, acetabulum dysplasia, and dysplasia of the secondary ossification center of the femur (Fig. 2).

Molecular data
Written informed consent was obtained from the parents of patient 1 and patient 2. Genomic DNA was extracted from peripheral blood using an e.Z.N.A.® Blood DNA Kit (Omega Bio-tek, Norcross, GA, USA) according to the manufacturer's protocol. A total of 363 genes, including COL1A1, COL1A2, COL11A1, and other related genes, were analyzed by targeted NGS in patient 1. The total size of the target regions of the capture array was 3.0 Mb. After filtering out common variants and neutral or benign mutations (allele frequency ≥0.5% in dbSNP (http://www.ncbi.nlm.nih.gov/projects/SNP/), the 1000 Genomes Pilot Project data (http://www.1000genomes.org/) or the BGI in-house database, which includes 1092 normal subjects), two mutations, SLC26A2 c.824T>C (NM_000112.3) and SLC26A2 c.1198C>T, were identified (Fig. 3a); a simplified sketch of this filtering step is given at the end of this passage. Both variants were absent from all databases, including 1000 Genomes, dbSNP, ESP6500, and the BGI in-house database, and both variants were predicted to be functionally damaging by MutationTaster, PolyPhen-2 and SIFT (Table 1). Both mutations are evolutionarily conserved (Fig. 3b). Sanger sequencing then revealed the two variants in patient 2; SLC26A2 c.824T>C and SLC26A2 c.1198C>T were identified in the mother and father, respectively (Fig. 3c).

Discussion and conclusion
Multiple epiphyseal dysplasia is a heterogeneous group of skeletal dysplasias characterized by dysplastic epiphyses at multiple sites [14]. Superti-Furga et al. first reported a homozygous SLC26A2 mutation (c.835C>T, p.Arg279Trp) in a 36-year-old man of tall-normal stature with MED-4 [12]. A variable phenotype with variable joint manifestations and normal to short stature was described in 18 individuals with MED-4 [8]. The deformity of clubfoot was observed in approximately 28% of the MED-4 patients. The most frequent radiographic finding was mild to moderate hip dysplasia. Only one patient had undergone hip replacement surgery for hip dysplasia. That patient required varisation osteotomies of both femoral necks. Other characteristic findings included brachydactyly and scoliosis [14].
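As flagged above, the following is a minimal, hypothetical sketch of the variant-filtering logic described in the Molecular data section. The record format, field names and the idea of comparing a single maximum allele frequency against the 0.5% cut-off are illustrative assumptions rather than the authors' actual pipeline.

```python
# Each candidate variant carries its maximum allele frequency across the
# population databases consulted (dbSNP, 1000 Genomes, BGI in-house) and a
# consensus in-silico prediction; the records below are hypothetical.
variants = [
    {"hgvs": "SLC26A2:c.824T>C",   "max_af": 0.000, "predicted": "damaging"},
    {"hgvs": "SLC26A2:c.1198C>T",  "max_af": 0.000, "predicted": "damaging"},
    {"hgvs": "COL1A1:c.104-441G>A", "max_af": 0.120, "predicted": "benign"},
]

AF_CUTOFF = 0.005  # variants at >=0.5% allele frequency are treated as common


def keep_variant(v: dict) -> bool:
    """Retain rare variants that are not predicted neutral/benign."""
    return v["max_af"] < AF_CUTOFF and v["predicted"] != "benign"


candidates = [v for v in variants if keep_variant(v)]
print([v["hgvs"] for v in candidates])  # -> the two SLC26A2 mutations
```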
A double-layer patella seems to be specific but not essential to MED-4 and, when present, separates the condition from the dominant forms of MED caused by mutations in COMP, COL9, and Matrilin-3. In the present study, the presence of short stature, coxa plana, brachydactyly, abnormal gait, and scoliosis in patient 1 led to the clinical diagnosis of MED-4. Very recently, the abnormal gait and coxa plana were noticed in the younger brother, patient 2, at the same age as in his elder sister, at 6 years. To verify the cause of the disease, we suggested that their parents authorize a genetic analysis using an available capture array, which covers 363 genes related to bone diseases, including COL1A1, COL1A2, COL11A1, SLC26A2, COMP, COL9A1, and other genes. Indeed, the younger brother harbored the same mutations in SLC26A2. We then suspected him to be at an early stage of MED-4, and we advised his parents to be cautious regarding the development of epiphyseal dysplasia in the future. Mutations in SLC26A2 are related to a wide range of phenotypes, depending on the residual sulfate transporter activity. These phenotypes range in severity from the very severe achondrogenesis type IB, atelosteogenesis type II, and diastrophic dysplasia to the relatively mild recessive MED-4. The most common SLC26A2 mutation reported in several studies is homozygous c.835C>T (p.Arg279Trp) [15]. Karniski compared the sulfate transport activity of 11 SLC26A2 mutations in the Xenopus laevis oocyte expression system [16]. Their results indicated that the p.Arg279Trp mutation transported sulfate at a rate 32% that of wild-type SLC26A2, while some mutations had minimal residual sulfate transport function. Makitie et al. reported another [14]. Very recently, a patient from a Caucasian three-generational family with MED-4 was reported to be a compound heterozygote for the common mutation in SLC26A2 and a novel mutation, p.Ser522Phe, while her maternal grandfather was homozygous for the common mutation [15]. Using a skeletal dysplasia targeted NGS panel, two novel heterozygous SLC26A2 mutations were identified in patient 1. These mutations, c.824T>C (p.Leu275Pro) and c.1198C>T (p.Leu400Phe), are located in the extracellular loop (between amino acids 263 and 296) and the cytoplasmic loop (between amino acids 399 and 420), respectively. Both mutations were predicted to be functionally deleterious and are missing from the above-mentioned databases. However, no functional studies have been undertaken for either mutation, so we do not know whether these are severe or mild mutations. Sanger sequencing confirmed that the mutations in patients 1 and 2 came from their parents, one mutation from each. Clinical features and genetic analysis suggested that in this eastern Chinese family, both patients were compound heterozygotes for two novel SLC26A2 mutations. In conclusion, we present two patients with MED-4 with evolving clinical and radiological features. Skeletal surveys, joint complications, and genetic testing of their parents were found to be essential to understanding the mechanism. Both patients were compound heterozygotes for two unreported mutations in SLC26A2, c.824T>C (p.Leu275Pro) and c.1198C>T (p.Leu400Phe).
2018-05-07T18:24:22.902Z
2018-05-03T00:00:00.000
{ "year": 2018, "sha1": "66c7142e3f43a38ef8050ecc896bcc31c77f536e", "oa_license": "CCBY", "oa_url": "https://bmcmedgenet.biomedcentral.com/track/pdf/10.1186/s12881-018-0596-7", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "66c7142e3f43a38ef8050ecc896bcc31c77f536e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
223597864
pes2o/s2orc
v3-fos-license
Racialised professionals' experiences of selective incivility in organisations: A multi-level analysis of subtle racism

This article explores how racialised professionals experience selective incivility in UK organisations. Analysing 22 in-depth, semi-structured interviews, we provide multi-level findings that relate to individual, organisational and societal phenomena to illuminate the workings of subtle racism. On the individual level, selective incivility appears as articulated through ascriptions of excess and deficit that marginalise racialised professionals; biased actions by white employees who operate as honest liars or strategic coverers; and white defensiveness against selective incivility claims. On the organisational level, organisational whitewashing, management denial and upstream exclusion constitute the key enablers of selective incivility. On the societal level, dynamic changes relating to increasing intolerance outside organisations indirectly yet sharply fuel selective incivility within organisations. Finally, racialised professionals experience intersectional (dis-)advantages at the imbrications of individual, organisation and society levels, shaping within-group variations in experiences of workplace selective incivility. Throughout all three levels of analysis and their interplay, differences in power and privilege inform the conditions of possibility for and the continual reproduction of selective incivility.

Introduction

Owing to the global rise of right-wing extremist ideologies, diversity is now at a critical juncture (Nkomo et al., 2019). The promise of equality has been largely unfulfilled for marginalised employees in organisations, and for some of them inequalities may be intensifying. In particular, despite decades of equality legislation, workplace racism remains a particularly persistent problem (Dickens, 2007; Quillian et al., 2017; Seifert and Wang, 2018; Stainback and Tomaskovic-Devey, 2012). Racialised workers routinely experience bullying and harassment; denial of opportunities in recruitment and selection, training and development, network access and promotion processes; and receive lower performance ratings, pay and other rewards (e.g. Bertrand and Mullainathan, 2004; Fox and Stallworth, 2005; Greenhaus et al., 1990; Guest, 2017; James, 2000; Pedulla and Pager, 2019). Racism is endemic in organisations, and disadvantages all aspects of racialised employees' work lives severely, as evidenced by myriad reports (McGregor-Smith, 2017; Nwabuzo, 2017; Solomon et al., 2019). While racism exacts significant harm on its targets, it is also deeply corrosive to organisations. Research shows that increased racial diversity is associated with better organisational outcomes, ranging from greater market share to larger profits (Herring, 2009; Smulowitz et al., 2019). Firm-level racial diversity can serve as a strategic human resource that facilitates sustainable competitive advantage (Richard, 2000). The benefits of racial inclusion tend to be even more noticeable in top management echelons, as racially diverse management teams buttress corporate reputation, business innovation and performance (Andrevski et al., 2014; Miller and Triana, 2009). By contrast, negative diversity climate perceptions surrounding race are associated with absenteeism, reduced commitment and increased turnover intentions for minority workers (McKay et al., 2007; Triana et al., 2015), which can indirectly depress organisational performance.
Yet, ironically, racism continues unimpeded at all levels and across all job types, causing problems for both racialised workers and the organisations in which they operate. Race and organisation scholarship has long tended to bifurcate racism into two main categories of practice: blatant racism (explicit processes of race-based discrimination) and subtle racism (implicit processes of race-based discrimination) (Pettigrew and Meertens, 1995). Conventionally, policy has focused on the relatively more overt forms of racism and its effects on racialised employees in different career stages and employment contexts. Yet, research has shown that subtle forms of racism are ubiquitous, and their impact ranges from career-damaging foreclosure of opportunities to significant dangers to employee well-being (Deitch et al., 2003; Van Laer and Janssens, 2011). When expressed through subtle means, discrimination is hard to pin down, precisely because its gradual unfolding appears insignificant, but the chronic nature of low-level hits can nonetheless trouble targets considerably (Cortina, 2008; Cortina et al., 2013). Further, in cases of subtle discrimination, there is usually only a vague connection between perpetrators' unwelcome attitudes, behaviours and cues and a targeted employee's identity (Dipboye and Halverson, 2004; Offermann et al., 2014). The ambiguity inherent to subtly expressed prejudice can lead targeted employees to misrecognise discrimination as an outcome of personal shortcomings on their own part, which can cause significant distress, undermining confidence as well as performance (Salvatore and Shelton, 2007). Additionally, when discrimination occurs subtly, employees can find it difficult to seek remedies through organisational grievance procedures because of the elusive nature of the mistreatment. The lack of clear and obvious remedies can further deepen employees' sense of disempowerment, exacerbating the damage to careers and well-being (Jones et al., 2016). In this light, developing new paradigms of discrimination is vital as the workings of racism change to take on ever-subtler forms in organisations (Ogbonna and Harris, 2006). This article explores the workplace selective incivility experiences of racialised professionals (the term adopted by this article to underline the social constructedness of race). As a theoretical lens, selective incivility incorporates individual, organisation and society-level effects, and is thus a powerful multi-level framework that can delineate how racism can operate through subtle means as a layered reality (Cortina, 2008). In this sense, selective incivility is significantly more useful than alternative theories, such as Sue's (2010) micro-aggressions model, which is neither explicitly multi-level nor exclusively focused on subtle discrimination. Additionally, utilising selective incivility as a theoretical lens contributes to the diversity literature by offering a more integrative analysis of race discrimination. By taking a holistic approach, we address the oversimplifications common to the prevailing scholarly preference for single-level accounts of workplace racism (e.g. characterising discrimination as a problem of individual deviance or structural inequality only). Our study addresses three inter-related questions: How do racialised professionals experience selective incivility as shaped by: (a) individual-level phenomena (i.e. interactions with co-workers, superiors and customers/clients); (b) organisation-level processes (i.e.
management and HR policies and practices); (c) society-level realities (i.e. social norms and ideology)? To answer these questions, the article undertakes qualitative research based on in-depth, semi-structured interviews. This methodological choice reflects a conscious effort to privilege the voices and perspectives of racialised professionals, as the elusive nature of subtle racism is most intelligible to the people who personally experience it in organisations.

Subtle racism in the workplace

The literature on subtle racism comprises explanations ranging from a focus on individual misbehaviours all the way to structural inequalities (Van Laer and Janssens, 2011). In general, individual-level explanations attempt to understand the psychosocial processes that generate discrimination on the basis of race, and how to remedy them in order to ensure fairer interactions in social settings (e.g. Dovidio and Gaertner, 2000; Gaertner and Dovidio, 1986; McConahay, 1986; Sears, 1988; Sears and Henry, 2003). In this perspective, exploring the cognitive mechanisms behind discrimination is the primary mode of understanding and addressing racial bias in organisations and society. At the structural end of the spectrum, scholarly attention shifts to society-level phenomena that configure an unequal system of race relations conditioned by the dominance of specific group(s) over others, and inform discriminatory policies and practices (Bonilla-Silva, 2006; Essed, 1991; Feagin, 2006). Each level of analysis offers explanations for subtle racism that come with a variety of strengths and weaknesses, ultimately underlining the inadequacy of limiting the analysis to any specific level. Individual-level research focuses on psychological forces that shape attitudes and behaviours that express subtle racism. For this group of arguments, the intentionality behind perpetrators' actions can be equivocal or clear, and the targets may experience some ambiguity in attributing the causes of the abuse. Key conceptualisations of subtle racism, where perpetrators have relatively unambiguous intentionality, are symbolic racism (Sears, 1988; Sears and Henry, 2003) and modern racism (McConahay, 1986). Symbolic and modern racists stand on the conservative end of the political spectrum, with strong beliefs in competition and individualism, and rejection of redistributive efforts, either organisationally or in society more generally. Not surprisingly, symbolic racism tends to be associated with a lack of support for positive discrimination or positive action measures designed to ensure greater parity in organisations (e.g. Franchi, 2003). At the point of hiring, modern racists, supported by apparent business justifications from authority figures in organisations, can cause significant distortions favouring white employees (Brief et al., 2000). Additionally, symbolic and modern racists can exact harm upon minorities through greater workplace bullying (Fox and Stallworth, 2005). While subscribing to the legitimacy of indirectly discriminatory policies, symbolic and modern racists consider themselves non-racists because of the subtlety of their biases. Another individual-level explanation for subtle racism is aversive racism, where perpetrators have no conscious intention to engage in biased behaviour.
In contrast to individuals who express symbolic or modern racism, individuals with aversive racism strongly believe in the liberal values of tolerance, fairness and equality, while still carrying unconscious biases that lead them to discriminate (Gaertner and Dovidio, 1986). Aversive racism occurs in multi-factor situations, where numerous reasons may account for the discriminatory behaviour, which helps perpetrators to justify biases by referencing other seemingly valid considerations. In this way, no clear discrepancy exists between the discriminators' positive self-image and negative actions, creating conducive ground for persistent bias (Gaertner and Dovidio, 1986). As Dovidio and Gaertner (2000) demonstrate in a US-based study, despite longitudinal declines in self-reported prejudice, participants still made selection decisions that disfavoured black candidates in candidate pools populated by roughly equally qualified people. While aversive racism research mainly originates from the USA, its salience is not confined to the North American context. For example, Hodson et al. (2005) show through UK experimental data that aversive racism remains intractable in legal settings, even when jurors have the benefit of procedural innovations that clarify how to carry out less biased evaluations. As aversive racism has garnered growing attention in recent years, organisations have started to focus on unconscious bias as a central diversity issue. Employees are now widely offered unconscious bias training, although research argues such individualised interventions are largely ineffectual (Noon, 2018). One of the fundamental problems with individual-level analyses of subtle racism is that they offer fragmented and decontextualised solutions. This individualistic approach unduly reduces subtle racism to a problem with specific employees or their abstract social-psychological categorisation processes that come from a place of innocence unmarked by harm ideation. Whether symbolic, modern or aversive in nature, the individualistic theorising of subtle racism pays insufficient attention to the organisational, institutional and systemic forces that constantly reproduce racial conflict between groups. Thus, such explanations fail to capture the domination of racialised employees through an unequal distribution of power, resources and rewards (e.g. Essed, 1991). Through micro-aggressions research, Sue (2010) attempts to account for racism by focusing on individuals' psychologies more expansively. Encompassing a spectrum from the blatant to the subtle end, his account of low-level slights ostensibly focuses on interpersonal encounters between perpetrators and targets of discrimination. Yet, he also signals that racialised minorities experience micro-aggressions against a noxious social backdrop of inequality (Sue, 2010). Micro-aggressions exert an oppressive material effect on targets, because they indirectly limit opportunities for ethnic minorities in terms of recruitment, selection, retention and promotion, which reinforces dominant groups' privileges (Sue, 2010). Additionally, the incessant and ambiguous nature of the attacks exhausts targets' cognitive resources and degrades their work performance (DeCuir-Gunby and Gunby, 2016; Holder et al., 2015; Kim et al., 2019), intensifying material inequalities. Further, racial micro-aggressions lead ethnic minorities to experience weaker belonging in organisations (Lewis et al., 2019).
Yet, although the micro-aggressions scholarship considers subtle racism as an issue that encompasses more than interpersonal effects, organisational and societal complexities remain a relatively more implicit part of the theorising. Moreover, utilising micro-aggressions could unduly complicate research focused solely on subtle racism, because it is a spectrum theory of blatant and subtle racisms expressed in micro-measures. Yet, Sue's (2010) ideas are instructive in this study because of his significant emphasis on racialised individuals' voices, and lived experiences, as the most appropriate data source in studying subtle racism. Located at the structural end of the literature, everyday racism is another prominent explanation of subtle racism. Everyday racism is a concatenation of practices unique to a racialised social system that hinges on the marginalisation of racialised groups. Here, normalisation of structural inequality entails a mix of denial of racism and subtly exclusionary moves perpetrated by the dominant group. Everyday racism works insidiously, slowly undercutting racialised minorities' job satisfaction and well-being (Deitch et al., 2003). Research shows that everyday racism becomes progressively more acute as employees go up the career ladder, and can take a wide range of harmful forms, such as negative stereotyping and problematisation of cultural differences (Van Laer and Janssens, 2011). While everyday racism nicely shows that subtle racism encompasses structural relations that reproduce workplace inequalities, it can unduly elide the active role of individuals in perpetuating interpersonal discrimination. Subtle racism has doubtless a significant structural dimension, but targets often experience subtle racism as propagated by specific perpetrators. The theoretical emphasis on structure alone risks distancing research from targets' personal experiences of discrimination. Taken together, the literature on subtle racism reveals clear evidence that majorities hold explicit or implicit biases against racialised minorities, which disadvantages stigmatised populations materially and symbolically in a plethora of social arenas including the workplace. Yet, neither individual-based explanations, nor structure-led accounts, delineate the multi-level dimensionality of racialised individuals' lived experiences in the workplace. The existing literature neglects holistic accounts that outline the layered complexity of how subtle racism works, and thus fails to account for why it persists. There is a pressing need for more nuanced understandings of subtle racism within the coordinates of a theoretical framework that can accommodate explorations that straddle across multiple levels of analysis. Turning to the multi-level concept of selective incivility is therefore a promising avenue for understanding subtle racism in all its complexity.

Selective incivility as a multi-level framework of subtle racism

Cortina (2008) developed the concept of selective incivility by combining insights from multiple literatures. In part, selective incivility draws from the general workplace incivility literature (Andersson and Pearson, 1999; Cortina et al., 2001; Pearson et al., 2001), which theorises antisocial work behaviours that appear with low intensity and ambiguous intentionality. Additionally, selective incivility contains ideas from subtle sexism (e.g.
Jackson et al., 2001; Swim et al., 2004; Tougas et al., 1995) and subtle racism, such as symbolic racism (Sears, 1988; Sears and Henry, 2003), modern racism (McConahay, 1986) and aversive racism (Dovidio and Gaertner, 2000; Gaertner and Dovidio, 1986). According to Cortina (2008), with the ascendancy of social norms that decry blatantly discriminatory attitudes and behaviours, contemporary discrimination emerges through a subtle channel of expression, selective incivility, where perpetrators selectively target minorities for workplace incivility. To assemble her notion of selective incivility, Cortina (2008) subscribes to Andersson and Pearson's (1999: 457) broad definition of workplace incivility as denoting 'low intensity deviant behavior with ambiguous intent to harm the target, in violation of workplace norms for mutual respect. Uncivil behaviors are characteristically rude and discourteous, displaying a lack of regard for others.' According to Cortina, while organisations view general workplace incivility negatively, it does not attract the same penalties that discriminatory behaviours like blatant racism would. Blatant racism is illegal in most country contexts, and many organisations have adopted non-discrimination policies that refute blatantly racist behaviours as unequivocally wrong under a zero-tolerance agenda. Therefore, individuals who may have racial biases have strong incentives to cover their true intent by resorting to selectively propagated uncivil actions, disguised as general incivility, against minorities (Cortina, 2008). Equally, individuals who hold implicit biases may selectively expose minorities to incivility, while believing themselves to hold egalitarian values and construing their actions as non-racist (Cortina, 2008). Finally, even persons who may not have explicit or implicit biases can potentially engage in selective incivility, if they model their behaviours after group-level norms in organisations with poor diversity climates, where minorities are marginalised (Cortina, 2008). While perpetrator intentionality exists on a continuum, the consequences for targets can be equally deleterious. Cortina's (2008) multi-level theorising involves three levels of analysis: individual, organisation and society. At the individual level, selective incivility hinges on affective factors, such as aversion against outgroup members and differences in esteem, as well as cognitive factors such as social categorisation and stereotyping (e.g. Dovidio et al., 2001; Jones, 2002). At the organisational level, the practical force of organisational non-discrimination policies, the extent of leadership support, and the nature of intra-organisational socio-cultural norms regarding diversity may shape selective incivility (e.g. Dipboye and Halverson, 2004). Finally, at the society level, Cortina (2008) propounds that a tradition of discrimination, differences in social roles and inter-group asymmetries in power can inform selective incivility (e.g. Operario and Fiske, 1998). In cases of selective incivility, the interplay between perpetrators and targets is an outcome of the interaction effects between racial prejudice at the individual level and the organisational climate surrounding subtle racism. Further, society-level race ideology and norms inform individual and organisation-level practices.
In sum, Cortina (2008) theorises selective incivility as a multi-level concept that eschews wholly individual-level or structural explanations, because subtle racism resides in the imbrications of multiple levels of analysis and their interaction effects. More recently, Cortina et al. (2013) extended the scope of selective incivility by accounting for race and gender intersectionally, demonstrating that racialised women experience selective incivility more sharply and damagingly. Originally developed by black feminists in the USA, intersectionality is an analytical tool that traces the dynamic effects of systems of oppression that pertain to the multiple identities simultaneously held by people (Crenshaw, 1990; Davis, 2008; Hill Collins and Bilge, 2016). Historically, the intersectionality scholarship focused mainly on gender, race and class (Yuval-Davis, 2006), but the concept is sufficiently tractable to account for a far wider array of social positionalities (Healy et al., 2019). In this research, eschewing an exclusive focus on surface-level diversity (Harrison et al., 1998), we consider intersectional dimensions of selective incivility expansively. Additionally, we consider intersectionality as a multilevel construct, going beyond dominant characterisations of intersectionality as double jeopardy (i.e. an additive view of (dis-)advantages at the individual level).

Methodology

Interviews with 22 ethnic minority professionals comprised the data for this study. The age range of the participants was 29 to 54. There were 12 male and 10 female participants. Five participants were Black Caribbean, four were Black African and one was mixed race (Black Caribbean and White). Eight participants had a South Asian background, and four participants had a Middle Eastern ethnic origin. The participants worked in a cross-section of industries, including communications, consulting, finance, engineering, healthcare, IT, law, local government, logistics, marketing, retail and tourism. Table 1 summarises the participants' key characteristics. Interviews lasted 45 to 75 minutes, and were conducted by one of the authors in locations chosen by the participants. The interviews were digitally recorded and fully transcribed. The participants received assurances of anonymity and confidentiality, and they were clear that they could withdraw from the study at any time during or after the interviews, if they wished to do so. At all stages of the research process, from data collection to data analysis, we prioritised reflexivity (Alvesson and Sköldberg, 2009). The interview process benefitted from a sense of rapport and mutual understanding between the participants and the interviewer, a racialised academic. Nevertheless, we remained vigilant about our possible preconceptions, particularly against the risk of interpreting the data as influenced by our own experiences of exclusion. Additionally, as an all-male research team, we continually questioned our own awareness and views regarding the gender dimension of our research. The data collection process hinged on a combination of purposive sampling and snowball sampling, non-probability sampling methods that prioritise securing deep understanding over achieving representativeness. Purposive sampling is particularly apt for explorations of social phenomena in fine detail by recruiting participants with specific qualities that confer a tight relevance to the research questions at hand (Patton, 2002).
Snowball sampling, which involves tapping into initial participants' social and work contacts to access further participants, is also a well-recognised approach for supporting participant recruitment in qualitative research (Browne, 2005). In this research, while slow and time-consuming, the sampling strategy, which utilised author networks and referral chains, yielded an eventual sample composed of informationally rich participants (Bryman and Bell, 2007). Empirical saturation shaped the sample size (Guest et al., 2006; Morse, 1994). By the 20th interview, saturation had set in substantially, and by the 22nd interview, no significant new insights emerged. The interviews were in-depth and semi-structured, affording participants influence over the question flow and content and prioritising their voices and perspectives. Open-ended questions corresponded broadly to our multi-level analysis (i.e. individual, organisation and society levels), as informed by Cortina's (2008) theorisation. Specifically, the questions explored participants' sense of how racially motivated selective incivility linked to issues of discrimination propagated by particular individuals such as colleagues, supervisors and clients/customers; organisational realities and processes that contribute to selective incivility; and wider societal forces that shape selective incivility organisationally and individually. Our interview process utilised the popular UK-based umbrella term BAME (Black, Asian and minority ethnic) that refers to all ethnic groups except for those socially constructed as white. However, we eventually decided to adopt the phrase 'racialised professionals', as we recognised in our later discussions and reflections that our participants' comfort and identification with the term BAME varied considerably. Guided by interpretivist ontology, we privileged participants' views of the social world and treated the meanings they attached to the systems, policies and practices they encountered as carrying significant weight. In particular, the interviewing approach prioritised understanding the participants' perceptions of selective incivility through the tracing of their lived experiences (Sandberg, 2005), as subtle racism may not be objectively identifiable outside the targets' experiential knowledge. We opted for thematic analysis to analyse our data (Boyatzis, 1998). The analysis process began by each author independently carrying out active and repeated readings of the interview texts to immerse deeply in the data. During the active reading phase, we referred to the literature frequently in order to ensure we accounted for all dimensions of interest (Tuckett, 2005), but we also remained open to previously unreported, newly discoverable phenomena. Our initial code generation exercise focused on all data segments that seemed key to racialised professionals' selective incivility experiences (Braun and Clarke, 2006). After the coding and collating of the data, we grouped together the long list of initial codes, and translated them into themes. While each author independently coded the data, frequent team meetings helped clarify a convergent approach to coding and interpretation to ensure consistency and precision. During the team meetings, we also checked the themes for coherence vis-à-vis the patterns we detected, the relative separateness of each theme's content and the degree of match between the themes and data extracts (Patton, 2002).
Where codes overlapped, we turned them into a common code, and we dropped some codes, as they did not correspond to any of the themes. The final step involved defining and labelling themes by clarifying what each theme denoted, how the themes interrelated and what particular dimensions of the data the themes encapsulated (Braun and Clarke, 2006). Table 2 depicts the data structure of our research.

Experiences of selective incivility enabled by individual-level effects

At the individual level, participant accounts of selective incivility correspond to three particular themes: ascriptions of excess and deficit to racialised professionals (Theme 1), white employees as honest liars vs. strategic coverers (Theme 2), and white employees' defensiveness (Theme 3).

Theme 1: Ascriptions of excess and deficit to racialised professionals.

The vast majority of the participants in this study pointed out negative stereotypes as the key basis of white employees' selective incivility against them. The racial stereotypes often stemmed from well-worn cultural misrepresentations of workers from particular ethnic origins. Despite variations in the typecasting, the stereotypes connoted significant convergences. Specifically, the interviews indicated that white employees ascribed characteristics of excess and deficit to racialised professionals, casting them as misfits for their jobs and work contexts. Racialised professionals' emotional states and work behaviours seemed problematic for falling outside an elusive 'normal' range: Whatever I do, it seems to come across wrong. Assertiveness is taken as being aggressive and rude, they think that my confidence is really just arrogance, my ambition looks pushy and annoying . . . I don't know my place, I'm always somehow off-kilter. (P7, Black Caribbean man, finance) Ascriptions of excess and deficit constituted selective incivility in their own right, but they also served as justification for further abusive behaviour towards racialised professionals, compounding distressing experiences of exclusion and marginalisation. For example, perceptions of racialised professionals' deviation from 'normality' meant that they were at the receiving end of sustained indignities perpetrated by white employees. When questioned further, one such participant explained that she never saw her manager behaving rudely towards white workers, but she could not always be sure her ethnic minority status motivated his animus. Such attributional uncertainty led participants to doubt themselves as undeserving impostors in their organisations. Additionally, many racialised professionals linked the accumulative effects of selective incivility to slower career progression. White employees' ascriptions of excess and deficit onto the participants led to the devaluation of their contributions in the workplace. As a result, at key career stages, the participants felt overlooked and underutilised, subject to unfair foreclosures of opportunity: I've been passed over for a promotion repeatedly. I always ask for feedback . . . there's never any concrete answer. The only feedback I get is kind of patronising advice, and that's really upsetting for me, because it's really uncaring feedback . . . they're absolutely indifferent to how that rejection affects me. (P5, mixed race man, tourism) Ascriptions of excess and deficit led participants to feel rejected and devalued, creating a weak sense of organisational belonging.
The sustained experiences of selective incivility sharpened participants' feelings of injury and marginalisation as racialised professionals in organisations.

Theme 2: White employees as honest liars versus strategic coverers.

The majority of the participants considered the perpetrators of selective incivility as either honest liars or strategic coverers. White employees who fell into the honest liar category held a non-racist self-image, yet engaged in subtle racism. For the participants, interactions with honest liars were particularly confusing, because they observed evidence of rhetorical support contradicted by negligible real-life backing. Honest liars were unwilling to pay more than lip service to equality, and lacked the desire to challenge the unequal distribution of resources and rewards across different groups in organisations. Thus, even if no conscious racial animus marred honest liars' actions, their inconsequential support wore thin in significance, and betrayed a selective lack of solidarity: The people who build their persona on being inclusive . . . I don't doubt their intentions necessarily, but you talk about things like recruitment and retention and promotion, they put up a wall, you have people you think will support you react like 'this is not my issue', which is disappointing. (P12, Black African woman, consulting) Conversely, some participants believed that they worked with white employees who clearly held a self-image steeped in an ideology of racial superiority even if they did not make it explicit in interpersonal communication. Functioning as strategic coverers, these employees displayed a disturbing pattern of selective incivility marked by considerable malice. Oftentimes, strategic coverers were uniquely problematic, because such perpetrators seemed to question the participants' very existence and purpose in organisations. Strategic coverers' selective incivility expressed a profound underestimation of the participants' capacity to contribute to their organisations as productive professionals: The attitude is: 'You've achieved this much, what more do you want?' Like I didn't even deserve to have my current role, but they gave it to me, because I am the token minority, and a better role for me would be downright unfair to others. (P14, South Asian man, finance) Most of the participants expressed doubts about the change capacity of honest liars and strategic coverers. At least some of the time, targets recognised experiences of selective incivility exactly for what they were, yet they found it difficult to make a grievance claim against the perpetrators, reducing opportunities for change. Additionally, the participants believed perpetrators displayed complex, longstanding behavioural patterns, and one-size-fits-all, short-term training solutions (e.g. unconscious bias training) would likely have no effect.

Theme 3: White employees' defensiveness.

The majority of the participants indicated that speaking truth to power regarding race was a fraught process met by highly defensive responses from white employees. In particular, white employees had a tendency to display substantial unease in workplace interactions that questioned racial dynamics. When the participants pointed out instances of possible racism, they encountered disavowal as well as emotional blowback. Thus, white employees' defensiveness (cf. DiAngelo, 2018) often closed off racialised professionals' conversational space to challenge and address selective incivility.
As a result, the participants felt inhibited from revealing the full extent of the difficulties they experienced to peers, managers, human resource officers and so on. The chilling effects of white defensiveness on racialised professionals' speech reinforced interpersonal domination, and shielded the majority from responsibility for inflicting harm. Some of the participants revealed that white employees tended to consider even the mildest challenges against racially inflected interactions as a personal affront, permanently deploying a self-protective stance. Thus, racialised professionals often faced not only the dismissal of the validity of their grievances, but also potential audience penalties from majority group members who invariably considered challenges as a threat rather than a learning opportunity: I used to make much more noise about racism, but I realised it didn't get me anywhere. That kind of proactive approach attracts more abuse. In my country, there is a saying, someone who tells the truth is driven out of nine villages. You're supposed to be just grateful and play nice. You're supposed to keep quiet, and pretend there's no racism, otherwise people get incredibly threatened. (P15, Middle Eastern man, tourism) In the participants' workplace experiences, race most often arose as a highly emotive subject. When participants complained of their discomfort with perpetrators, oftentimes the interpersonal conflict assumed a new dimension in which the perpetrators assumed the role of the victim. Specifically, white employees expressed strong negative feelings about any interpretation of racial undertones in their attitudes or behaviours: I had an exchange with someone because he made a very insensitive joke about refugees, and I said, 'As a woman of colour, your comments are really offensive to me. Can you be more sensitive in the future?' [He] looked so shocked. I felt like I wounded him. (P10, Black Caribbean woman, logistics) Overall, the participants thought that race was a taboo subject in their organisations. When the racialised professionals challenged perpetrators, they faced punitive and unpleasant emotional responses, which stifled their capacity to raise awareness in the workplace.

Experiences of selective incivility enabled by organisation-level effects

At the organisation level, organisational whitewashing (Theme 4), management denial (Theme 5) and upstream exclusion (Theme 6) emerged as enablers of selective incivility against racialised professionals in the workplace.

Theme 4: Organisational whitewashing.

Some participants thought their organisations were preoccupied with conveying the impression of valuing equality and non-discrimination rather than aiming to tackle subtle racism. Thus, organisational action often skewed towards legitimating the current order as essentially unproblematic. For example, managers and HR officers seemed to operate with the assumption that incidence of bona fide racism was relatively rare and isolated, perpetrated by 'bad apples', rather than constituting a dominant feature of interpersonal interactions within an unequal organisational culture. Such an approach reinforced organisational beliefs about the suitability of the current racial hierarchies, and invisibilised them. Organisational whitewashing of widespread selective incivility hinged on downplaying racialised professionals' concerns: I raised a formal grievance . . . you hope and pray that HR is on your side in these things, but they decided that I was overreacting.
There was no case, nothing actionable at all . . . you can be undermined for months, and if you complain, that's what they think. (P18, South Asian man, engineering) Some participants who lodged subtle racism claims faced questions about the validity of their perceptions, and the particular manner in which they reacted to subtle racism: I was responsible for a project with a few others who kept excluding me from the decision making . . . I was having a meeting with one of them, and he kept criticising me and making me feel like everything I did was shit. I raised my tone of voice, not like shouting, but I did speak more forcefully, of course how dare a black man speak like that? He was livid, like suddenly his face got so white, and he just upped and left . . . They made it out that my behaviour had been threatening, and he felt unsafe . . . I was the one who had to make a grovelling apology to a man who couldn't stand the sight of me. (P2, Black Caribbean man, finance) According to the participants, individuals who benefited from white privilege ran organisations in accord with a racially differentiated distribution of resources and rewards. Yet, the legal, reputational and stakeholder pressures required organisations to give the appearance of equal opportunity. Thus, while organisations offered diversity training, such steps reflected the underlying organisational need to express compliance. The corrective measures that appeared scrupulous or introspective often watered down the problems or hid the depth and breadth of selective incivility. The training-centric diversity management strategies controlled the agenda for change and negated calls for organisational transformation: Whenever there's a problem, be it racism or any other type of bias, HR and the board have the same strategy to put out the fire . . . they throw some more diversity training at the problem . . . Nothing other than superficial stuff, and that really has very, very little impact . . . everybody's really in on it, it's common knowledge . . . the real concern is managing our reputation. (P21, South Asian woman, communications) In the participants' view, organisational efforts to contain the appearance of selective incivility created an environment of concealment, where organisations not only self-congratulated, but also actively disguised the extent of workplace race discrimination. Thus, incidents that required reflection and change were trivialised, and organisational disciplinary mechanisms did not deter the perpetrators of selective incivility.

Theme 5: Management denial.

Some participants believed that middle and senior management had a tendency to deny the incidence and extent of selective incivility in organisations. The denialist management approach worked by suggesting that targets perceived racism unwarrantedly, often because managers tended to consider selective incivility as generic lapses in interpersonal conduct without any untoward racial content. When racialised professionals expressed alarm and frustration about an incident, the management response was to construct the event in question as happenstance or a misunderstanding, invalidating the viability of targets' claims. In this sense, management denial involved a consistently positive reading of perpetrators' motives: Somebody would have to physically attack me, like screaming racial slurs at me, you know, actually punching me in the face before management would say, 'oh yes, that was racist, we need to do something about that'.
I don't think my manager is capable of acknowledging anything less clear-cut than that as racism. It's clear to me who they would give the benefit of the doubt. (P11, Black Caribbean man, finance) Moreover, the denialist approach indicated managerial arrogance, insinuating that racialised professionals were disgruntled trouble-makers. Such recriminations positioned grievances as imagined and vexatious, as well as damaging to workforce cohesion. As a result, some participants felt alienated and disenchanted at work. The long-term disempowerment through denial of subtle racism reduced some participants' work motivation and performance: At my previous work, I suffered a lot with racism, but whenever I tried to seek support, it was swept under the carpet by the team leader, who always had an unsupportive answer to give to everything I mentioned . . . I was gaslighted the entire time, and that slowly killed off all the motivation I had going into the job, and then obviously my performance went downhill, which then made me the problem employee. (P19, Middle Eastern man, finance) Overall, management denial had a silencing effect on racialised professionals, and an emboldening effect on perpetrators. Managers seemed to be the most immediate authority figures and the first port of call when selective incivility occurred. Thus, managerial failure to acknowledge and intervene in support of racial equality severely reduced some participants' capacity to seek redress, and increased the likelihood of employee turnover.

Theme 6: Upstream exclusion.

The participants in this study almost universally complained of a racial diversity shortage in the upper echelons of organisational management. Numerical balancing efforts still remained limited to employees without management authority or the lowest rungs of the managerial hierarchy. The progressive lack of racial diversity as the participants looked up the organisational hierarchy reduced their confidence about the viability of voicing concerns. They also faced new dilemmas as they moved up the organisational hierarchy, even if only moderately. On the one hand, as they assumed middle management roles, selective incivility seemed to increase and become more visible to them, especially because at the middle levels of the organisations they found themselves surrounded by a super-majority of white employees. On the other hand, challenging other management-level employees could spell career costs. The participants also worried that the numerical under-representation of race diversity in the higher echelons created a boardroom knowledge deficit, and the top management remained uninformed about how widely and deeply selective incivility afflicted their organisations. On the one hand, racialised professionals thought that the cumulative effects of selective incivility stunted their career course or would likely substantially limit their access to the executive level. On the other hand, existing organisational decision-makers, whose almost overwhelming whiteness our participants frequently raised as a critical problem, also lacked a good understanding of how selective incivility operated, which served as a major impediment to thwarting subtle racism from the top: All the top positions are occupied by white men, and the simple fact is that my career has a ceiling . . . The leadership doesn't have a good understanding . . .
because the people at the top who can do more to change the culture and who can make a difference that way, that doesn't include anyone who walks in my shoes. (P22, Middle Eastern man, IT) According to the participant narrations, the upstream exclusion was responsible for the dearth of effective interventions and corrective measures. The lack of top management insight into racialised professionals' situated experiences led to piecemeal interventions that failed to address selective incivility adequately.

Experiences of selective incivility enabled by society-level effects

Some participants linked growing societal intolerance (Theme 7) over the past decade to a heightened frequency and severity in their experiences of selective incivility in the workplace.

Theme 7: Growing societal intolerance.

The participants had a keen awareness of the coarsening of public rhetoric regarding ethnic minorities in the past decade. They referred to how racial inequalities were deepened by government policies, including the Prevent policy that enlists educators to report students suspected of terrorist sympathies, the hostile environment policy designed to drive away undocumented immigrants, the Windrush deportations that wrongly denied the citizenship rights of immigrants from Caribbean countries and so on. For the participants, the erosion of community goodwill appeared to be linked to a decline in the climate of inclusion within their organisations. The negative changes in the wider social context surfaced in their interactions in and outside their organisations. Additionally, the participants referred to societal debates over the inclusion versus exclusion of ethnic minorities (e.g. the Brexit process) as polarising the community climate. Some participants thought societal polarisation reinforced already existing negative images white people utilised in their interactions with racialised professionals. In this way, societal forces were not simply a static background condition for the participants' lived experiences and career trajectories, but an evolving constellation of events and processes that sharpened workplace discrimination: We live in a racially divided country, health stats, education stats, labour market stats . . . race is written all over our society . . . if you think about racism and it's again on the rise . . . yeah I think it makes a difference to how we work together or fail to work together rather. (P16, South Asian woman, local government) Some participants believed that white employees tended to perceive organised life as a zero-sum game, where minimising disadvantage for racialised professionals would require a levelling off or lessening of white privilege, which they opposed. Perceived threats to white privilege in organisations intensified subtle racism, mirroring rising societal disagreements over the distribution of valued resources across groups.

Experiences of selective incivility enabled by interaction effects across multiple levels

In the interviews, the imbrications of society, organisation and individual-level effects emerged as intersectional (dis-)advantages (Theme 8) that enabled divergences across participants' experiences of selective incivility.

Theme 8: Intersectional (dis-)advantages.

Some participant accounts pointed to interconnections between societal narratives and ideas regarding race, organisations' internal reflection of society's race-inflected divisions and the unfair differential treatment of racialised professionals at work.
Importantly, participant experiences of intersectionality (i.e. the situated effects of holding multiple identities simultaneously) did not reflect a straightforward advantage or disadvantage in relation to selective incivility. For example, some participants emphasised the ever-shifting implications of gender and race: It's also not a case of you're a woman, so it'll always be worse for you. It depends on the company culture, and the people there . . . In some industries, it really pays more to be a man . . . I've also seen cases where it's actually worse if you're a man. (P4, Black African woman, tourism) Instead of double jeopardy, most participants described the interplay of gender and race as unpredictable and context-dependent. They believed racialised professionals' gender could have different degrees of salience across various job types, organisational settings, occupations and industries. Additionally, some participants mentioned that differences in class privilege, types of accent and skin tone influenced their standing, underlining the complexities of intersectional (dis-)advantages. Interestingly, some participants believed the intersections of racialised status and racialised professionals' attitudes and worldviews significantly affected the extent of selective incivility they experienced in organisations. Specifically, they considered that racialised professionals whose views about workplace racism tallied with organisational race orthodoxies held an advantage over racialised professionals who protested racism: I knew a trainee manager who sang from the same hymn sheet on racism as the average white employee . . . 'racism is a problem of the past, it's not relevant anymore'. It's frustrating to see a brown person getting it so wrong. I thought, what's he playing at? . . . he was an honorary white man, which worked for him career-wise. (P1, Middle Eastern man, retail) Expressing conformity with the existing race-inflected organisational norms and power structures seemed to confer limited and conditional immunity upon some racialised professionals. The participants thought that racialised professionals who monitored themselves by carefully curating a workplace identity that signalled an exclusive focus on individual career progression instead of solidarity avoided even deeper selective incivility, which potentially generated further silencing effects.

Discussion

Our research expands the conceptual scope of Cortina's (2008) selective incivility, while also confirming key elements of the framework. Our findings regarding frequent ascriptions of excess and deficit to racialised professionals reflect the powerful grip of negative social categorisations that shape white employees' perceptions of difference. As we explain, while selective incivility appears as merely momentary expressions of denigration, it also has cumulative consequences on working lives and careers, because it has the long-term effect of casting racialised professionals as interlopers in organisations. Additionally, honest liars versus strategic coverers exemplify how outgroup aversion and differential esteem, either consciously or unconsciously deployed, tend to be widely expressed in organisations. The ubiquity of honest liars and strategic coverers is important in understanding the continuity of selective incivility, despite the increasing social rejection of racism in rhetorically inclusive organisations.
Introducing white defensiveness (see also DiAngelo, 2018), we extended Cortina's framework at the individual level by accounting for the affective responses of white people to claims of selective incivility. White defensiveness explains why calling out racism can ironically seem more offensive than subtly racist behaviours, and thus it points to an important mechanism unaccounted for by the selective incivility framework. The silencing effects of white defensiveness, as revealed by this study, form a critical aspect of how selective incivility operates at the individual level. At the organisation level, our research shows that organisational whitewashing and management denial within a context of upstream exclusion of racialised professionals render organisational policy, norms and leadership practices advantageous to white employees, providing fertile ground for selective incivility. While our findings map onto organisation-level concerns in Cortina's (2008) framework with some degree of fit, they signify the need to account for the centrality of organisational power hierarchies, which ensures selective incivility is easy to deploy and resistant to change. It is possible to deny subtle racism or whitewash it because power holders are white, and they have the capacity to define reality in accordance with their interests. By contrast, most racialised professionals wield significantly less power and influence in organisations, which reduces their ability to make legitimate claims about selective incivility that would ensure perpetrators are deterred. Interestingly, at the society level, Cortina (2008) recognises the importance of existing power asymmetries across groups to the operation of selective incivility at work. Yet, our study shows that power differentials generate workplace pecking orders, which enable selective incivility. Thus, power realities should also be key to the organisation level of analysis in selective incivility. As we demonstrate, at the level of society, dynamic macro-level changes in equality norms and practices are endogenous to organisational policies and employee actions (e.g. Tatli et al., 2017). In our research, the worsening social exclusion in the national context further fuelled selective incivility in the workplace, underlining a strong relationship between community diversity climate and organisational race relations (see also Ragins et al., 2012). Building dynamism into the society-level effects in Cortina's (2008) framework, we demonstrate how selective incivility has time and place dimensions. Society-level effects are long-lived factors, but they do not statically shape how selective incivility operates, because they are subject to significant historical forces that can become highly salient within specific periods (e.g. the current anti-immigrant culture in the UK). Thus, we show that the nature and implications of selective incivility emerge within particular contexts, and contextual sensitivity must guide the theory's empirical application. A further theoretical contribution of our article is to trace the interplay of all three levels of analysis that constitute selective incivility through our intersectional approach. Although the levels of analysis in selective incivility are analytically separable, they are not mutually exclusive. All levels bleed into each other in some respects, and a strong interplay transpires across the three levels of analysis.
While the notion of selective incivility recognises that different levels interact, it does not specify the crucially important interaction effects that emerge at the interface of individual, organisation and society levels. One useful step in this direction was Cortina et al.'s (2013) incorporation of intersectionality of race and gender into selective incivility, but their operationalisation of intersectionality mainly resided at the individual level. As we argue, intersectional theorising aims to capture the interplay of different levels of analysis in shaping the social experiences of individuals who carry multiple identities simultaneously. Additionally, Cortina et al. (2013) emphasise how selective incivility intensifies for workers with two stigmatised identity categories (i.e. double jeopardy). However, we argue, the intersections of identities people hold based on their structural locations can confer upon them both advantages and disadvantages in a complex and counterintuitive manner. In our research, intersectional (dis-)advantages carry nuances in relation to gender and race owing to additional dimensions, such as accentism and national origin, colourism, occupational/industry affiliation and so on. We also reveal that intersectional (dis-)advantages accrue through the interplay of surface-level characteristics (i.e. gender and race) and deep-level characteristics (i.e. capacity for self-monitoring/conformity; individualistic vs. solidaristic outlook). Thus, our study highlights the multidimensionality of racialised professionals' selective incivility experiences, superseding double jeopardy to emphasise within-group variety and complexity.

Practical implications

Our research demonstrates the incompleteness of single-level analyses in understanding subtle racism in organisations. Studies that consider subtle racism at the individual level (e.g. Dovidio and Gaertner, 2000; McConahay, 1986) or the structural level (e.g. Bonilla-Silva, 2006; Essed, 1991; Feagin, 2006; Van Laer and Janssens, 2011) have the advantage of parsimony. Yet, their strong emphasis on a particular layer of reality risks overconfidence in the efficacy of partial solutions. In this light, the practical insights and policy implications of this study point to a radical rethink of the existing approaches to diversity and inclusion training. All of our participants were in professional roles situated in rhetorically inclusive organisations that relied extensively on superficial modes of diversity training (e.g. unconscious bias training) to address discrimination issues. Yet, our findings chime with recent research that questions the widely held HR view that employees are responsive to unconscious bias training, and would modify their attitudes and behaviours when they realise they have subtle biases (Noon, 2018). That some of the participants thought diversity training was a means of organisational whitewashing revealed the depth of their distrust in convenient solutions that reduce a complex, multilevel organisational issue to the individual level of deviant employees. This study highlights the need to eschew temporally limited, substantively superficial training, which has a poor track record in creating meaningful change (Bezrukova et al., 2016). Training should prioritise perspective-taking activities that build a nuanced awareness of the obstacles faced by different racialised people, and goal-setting activities that track trainees' progress over time against measurable actions (Lindsey et al., 2017).
Engaged, critically reflexive training requires safe spaces for workers to ask difficult questions, have uncomfortable conversations and reflect on and problematise white privilege proactively. Importantly, training regimes that acknowledge the complexity of subtle racism, and accordingly eschew piecemeal solutions in favour of intensive activity over time, require significantly greater financial resources for their design and implementation. Furthermore, the power and standing of racialised employees need to be enhanced significantly in organisations to address selective incivility effectively. All available solutions within the law need to be deployed to dismantle racial hierarchies in organisations, including targeted calibration of selection and promotion to enhance racialised employees' numerical representation in the upper echelons (Noon, 2010, 2012). In addition, using leadership development programmes, mentoring and coaching opportunities, and external recruitment consultants may help to resolve the existing power imbalances between white and racialised employees in organisations. Additionally, action plans that articulate publicly declared key performance indicators for race equality need to be in place to track organisational performance over time in eradicating racial hierarchies. Voluntaristic organisational action can be supplemented by government regulation that would mandate the publication of the board and senior management composition of organisations by ethnicity, and the publication of ethnicity pay gap data (McGregor-Smith, 2017). Mirroring the multi-level nature of discrimination, both bottom-up and top-down approaches can be mobilised simultaneously to create meaningful change (Groutsis et al., 2014). Challenging the white supremacy that historically defined organisations requires recognising the variegated problems faced by racialised workers. Voice opportunities for disadvantaged workers are critical for resistance to inequalities (Ozturk and Rumens, 2015). In order to amplify racialised professionals' voices that have long been ignored or silenced, organisations need to put in place forums to explore workplace racism. It is also important to ensure racialised workers' full involvement in strategic planning for organisational diversity and inclusion, secure significant racial diversity in organisational decision-making processes and make race equality a key feature of all organisational projects. The complexity and opacity of selective incivility make it highly impervious to change. However, comprehensive training along with diffusion of power across all groups, and effective voice mechanisms designed for empowerment, can disrupt the continuing dehumanisation of racialised professionals in organisations.

Conclusion

This research has highlighted the multi-level nature of subtle racism by utilising the theoretical lens of selective incivility. We are mindful that the results apply to a particular group, racialised professionals, who may face different work realities as compared with lower-income racialised workers with insecure employment contracts. Future studies focused on the intersection of class and race hierarchies may reveal additional complexities in the multi-level workings of selective incivility.
Indeed, utilising selective incivility in relation to other less well-studied bases of inequality, such as age, disability, sexual orientation, gender identity, religion and so on in a wider range of contexts, including non-western settings, can reveal further the nuances of subtle discrimination that pervade work organisations today. Additionally, future research could deploy selective incivility to scrutinise hierarchies within the category of whiteness as well, expanding the topical reach of the concept into a wider array of social groups, which equality scholarship tends to miss as possible targets of racism (e.g. Eastern European workers in the UK). Moreover, a critical next step to broaden insights from selective incivility research to date is to undertake qualitative research that encompasses both majority and minority groups, exploring the full range of ambiguities and ambivalences that help reproduce conditions for selective incivility. Selective incivility has enormous emotional, psychic and material costs for racialised professionals, but it likely entails significant indirect costs for organisations as well, through declining motivation and increased turnover intentions, as our research signals. Quantitative research can help measure such indirect costs, providing further evidence of the full extent and kinds of harm selective incivility generates for not only racialised workers but also their organisations. The concept of selective incivility is extraordinarily powerful, and we strongly advocate its wider and diverse use in equalities research. Not only can selective incivility offer a more refined view of how new modes of discrimination are shaping human relations in the context of work, but it can also provide fresh and novel insights to tackle subtle racism, one of the most fundamental problems in organisations today. As we conclude our article, the heinous racism that led to George Floyd's death in the USA and the disproportionate impact of the Covid-19 pandemic on racialised communities (often identified as BAME in UK terminology) weigh heavily on our minds. These realities are painful reminders of how organisations and societies have spectacularly failed to live up to the principle of equality for all. As organisations may experience contractions in business activity as a result of the pandemic, it is especially important to remain vigilant that racialised workers do not experience the brunt of the fallout (e.g. redundancies, promotion or salary freezes, reduction in hours, etc.). Despite the numerous organisational problems our research lays bare in the context of subtle racism, we remain optimistic that transformative change is possible. A powerful ray of hope at this time of great upheaval has been the emergence of a new alliance politics between young people from divergent backgrounds as they protest in favour of race equality in the main streets of world cities. It is our hope that the growing public awareness of racism's unacceptable toll will help inspire a renewed solidarity to push race equality into the epicentre of organisational life.

Funding

The authors received no financial support for the research, authorship and/or publication of this article.

Open-ended interview questions relating to multiple levels.

Individual
How are you treated as a BAME professional by your colleagues at work?
How do customers/clients treat you as a BAME professional?
How are you treated as a BAME professional when you visit different organisations or client sites?
What are some of the racial stereotypes, if any, that inform the way white people interact with you at work?

Organisation
What is the organisational diversity climate like in your workplace?
What is the current state of race relations in your organisation?
Can you tell me about any situations in which issues faced by BAME workers have been discussed within your organisation?
What is the impact of your organisation's policies on your experience of race at work?
How do HR systems and processes respond to incidents of subtle racism in your organisation?
How do managers respond to complaints about subtle racism in your organisation?
What is the organisational leadership's position on race issues in your workplace?
What are your organisation's explicit and/or implicit preferences for the advancement of greater racial equality in your workplace?
How do different sub-groups of the BAME workforce experience subtle racism in your organisation?

Society
What is the impact of racist incidents and scandals that occur in this society on how racism operates in the workplace?
How does the history of racial discrimination in this society shape the organisational policies and individual practices you encounter at work?
What are the implications, if any, of race-based power differences across groups in society on the work lives of BAME professionals?
How do current public attitudes regarding race shape, if at all, your experiences as a BAME professional in your workplace?
What impact, if any, does the recent rise of right-wing ideologies and politics in society have on your experiences as a BAME professional at work?
What is the impact of societal views on different groups of BAME people on how they experience subtle racism in the workplace?
Operational Insights From the Longitudinal Analysis of a Linear Accelerator Machine Log

Purpose

This study aimed to perform a longitudinal analysis of linear accelerator (linac) technical faults reported with a cloud-based Machine Log system in use in a busy academic clinic and derive operational insights related to linac reliability, clinical utilization, and performance.

Methods

We queried the Machine Log system for the following parameters: linac type, number of reported technical faults, types of fault, number of faults where the linac was disabled, and estimated clinical downtime. The number of fractions treated and monitor units (MU) delivered were obtained from the record and verify system as metrics of linac utilization and to normalize the number of reported linac faults, facilitating inter-comparison. Two Varian TrueBeam C-arm linacs (Varian Medical Systems, Palo Alto, CA), one Varian 21iX C-arm linac (Varian Medical Systems, Palo Alto, CA), and one newly installed Varian Halcyon ring gantry linac (Varian Medical Systems, Palo Alto, CA) were evaluated. The linacs were studied over a 30-month period from September 2017 to March 2020.

Results

Over 30 months, comprising 677 clinical days, 1234 faults were reported from all linacs, including 153 "linac down" events requiring rescheduling or cancellation of treatments. The TrueBeam linacs reported nearly twice as many imaging, multileaf collimator (MLC), and beam generation faults per fraction and per MU as the Halcyon. Halcyon experienced fewer beam generation/steering, accessory, and cooling-related faults than the other linacs but reported more computer and networking issues. Although it employs a relatively new MLC design compared to the C-arm linacs and delivers primarily intensity-modulated treatments, Halcyon reported fewer MLC faults than the other linacs. The 21iX linac had the fewest software-related faults but was subject to the most cooling-related faults, which we attributed to extensive use of this linac for treatment techniques with extended beam-on times.

Conclusions

A longitudinal analysis of a cloud-based Machine Log system yielded operational insights into the utilization, performance, and technical reliability of the linacs in use at our institution. Several trends in linac sub-system reliability were identified and could be attributed to either age, design, clinical use, or operational demands. The results of this analysis will be used as a basis for designing linac quality assurance schedules that reflect actual linac usage and observed sub-system reliability. Such a practice may contribute to a clinic workflow subject to fewer disruptions from linac faults, ultimately improving efficiency and patient safety.

Introduction

Medical linear accelerators (linacs) are complex systems that, in addition to core systems for beam generation and the cooling required to manage the associated thermal loads, include sub-systems such as the multileaf collimator (MLC) [1] and volumetric imaging [2]. Modern linacs can also be dynamically linked to ancillary systems, such as robotic couches for remote patient positioning and real-time motion monitoring with infrared, electromagnetic, and optical systems [3]. Integration of positioning and verification systems with the linac allows for precise and accurate treatment techniques such as intensity-modulated radiation therapy (IMRT), stereotactic radiosurgery (SRS), hypo-fractionated high-dose stereotactic body radiation therapy (SBRT), and respiratory-gated delivery.
For intensity-modulated fields, the monitor units (MU) per treatment have also increased along with the complexity of techniques. Healthcare equipment manufacturers are regulated and are incentivized to design systems with multiple redundancies and interlocks to prevent linac performance deviations that can result in patient harm. However, linac faults routinely occur in normal operation without any harm to the patient. Some faults are designed to take the linac offline until the fault can be investigated and maintenance performed. Other faults are either quickly cleared or cleared after some action by clinic staff. This downtime can still have a clinical impact, particularly in a busy clinic, leading to treatment delays and time pressures on staff. Adapting to workflow disruptions in healthcare delivery can also increase the risk of error [4]. In radiation therapy, errors originating in workflow disruptions may occur when patients are moved to a different linac or when staff who may not be as familiar with a patient's setup and accessories are reassigned [5]. With an aging population as well as the increased therapeutic use of radiation for malignant and benign diseases, the number of patients being treated is expected to increase [6]. Thus, the hardware and software performance demands on linacs are increasing, and their overall reliability will continue to be an important issue. Minimizing linac downtime and ensuring reliable patient throughput is therefore essential to providing high-quality radiation therapy. Furthermore, as radiation therapy becomes more widely available in low- and middle-income countries (LMICs), high throughput and reliability of radiotherapy equipment will be critical as service and support infrastructure may not be readily available [7]. Our institution employs a cloud-based electronic equipment fault reporting system (Machine Log) to alert the staff and vendor support of the incidence of faults, interlocks, and other technical issues originating with the linacs and other treatment-related equipment in the clinic. The Machine Log is compatible with treatment equipment from any vendor. Previous work has demonstrated that Machine Log systems can provide operational efficiencies and reduce clinic disruptions, potentially improving patient safety and the quality of care [8]. The purpose of this work is to use a longitudinal analysis of Machine Log data for our institution's linacs to perform a comparative evaluation of technical reliability and workflow disruptions, accounting for linac usage and patient throughput.

Materials and Methods

Our institution's principal clinic is equipped with four linacs of varying designs, ages, and clinical roles. These linacs are operated in a hospital setting and are supported by a vendor-provided technical help desk as well as field service engineers, although the latter are not based on site.

Linac descriptions

As shown in Table 1, these four linacs comprise two TrueBeam™ models (Varian Medical Systems, Palo Alto, CA) and one 21iX (Varian Medical Systems, Palo Alto, CA), all equipped with the Millennium™ 120-leaf MLC (Varian Medical Systems, Palo Alto, CA). The TrueBeam linacs are equipped with MV portal imaging and kV cone-beam computed tomography (CBCT). The 21iX was originally a 21EX model without imaging that was later upgraded with on-board imaging (OBI) several years prior to this study. Our institution is also equipped with a Varian Halcyon™ 2.0 (Varian Medical Systems, Palo Alto, CA).
Halcyon is a novel ring-gantry linac configuration designed for streamlined operation and high patient throughput. Our institution was among the first to introduce the Halcyon into clinical service, installing the system in mid-2017 and treating the first patient in September 2017. This relatively new linac design consists of a single-energy, magnetron-based 6 MV flattening filter-free (FFF) linear accelerator mounted on a rotating gantry with a beam stopper and electronic portal imaging device (EPID) mounted opposite. The maximum dose rate is 800 MU/minute. The EPID acquires either orthogonal MV image pairs or MV cone-beam computed tomography (MV-CBCT) images for setup verification. The MV-CBCT field of view is 28 cm x 28 cm with a 0.22 mm pixel spacing. At the time of commissioning and first clinical use, Halcyon did not have kV imaging; this capability was added approximately one year later in an upgrade to the Halcyon 2.0 standard. The kV-CBCT field of view is 28 cm by 28 cm with a 0.336 mm pixel spacing. Halcyon also has a novel MLC design that shapes the beam with a staggered, dual-layer leaf bank system and no collimator jaws. The upper layer consists of two banks of 29 leaves and the lower layer consists of two banks of 28 leaves, for a total of 114 leaves of 1.0 cm width. The maximum field size is 28.0 cm by 28.0 cm. The full gantry assembly is enclosed within a carbon fiber shell, surrounding a wide bore through which patients are positioned for treatment. The linac rotates within the shell at four revolutions per minute (rpm), approximately four times the rotation speed of conventional Varian linacs with a C-arm gantry design. The mechanical accuracy of the Halcyon MLC has been shown to be equivalent to that of the TrueBeam MLC while operating at a higher gantry rotation speed [9]. New technology can be associated with periods of higher service and support demands as installation and manufacturing issues become apparent, unknown design deficiencies appear, and staff build familiarity and develop competencies [10][11][12]. To yield operational insights into the initial technical reliability of Halcyon, we divided the 30-month evaluation period into two phases: the six-month entry-into-service period and the subsequent 24 months of routine clinical use. Halcyon has been the subject of numerous evaluation studies, including characterization of the novel MLC design [13], acceptance and commissioning [14][15][16], output calibration [17], radiographic imaging [18][19], integration with surface imaging [20], and clinical safety [21]. Over the total evaluation period of 30 months, the clinic treated an average of 150 patients per day (range: 130-190). The typical treatment fraction durations were 15 minutes for conventional treatments and 30 minutes or longer for special procedures, such as stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT), delivered exclusively on the TrueBeam linacs, or total body irradiation (TBI) and total skin electron (TSE) therapy, delivered exclusively on the 21iX linac.

Linac Machine Log system

Although most modern linacs have some ability to automatically record faults and other errors internally, these data are not readily available to clinic staff and often lack context about how the linac was being used at the time and about the clinical impact of the fault.
Our institution employs a cloud-based Machine Log system to record all technical faults arising from the linacs, regardless of significance or downtime, in the form of user-generated, context-rich reports. The Machine Log also serves as an institution-wide event notification and management system for disseminating and storing linac fault reports. Clinic staff, including radiation therapists and medical physicists, can report faults via a simple webpage with a box for free-text entry, predefined checkboxes for event categorization, and radio buttons to indicate linac status. Machine Log reports also quantify linac downtime and provide the option to electronically notify the vendor and request technical help desk or field service engineer support. The system is designed for ease and speed of entry so that reports can be distributed in near real-time. Machine Log report entries can be reviewed from any web browser, including mobile devices; entries can be filtered by linac, date/time, linac status (i.e., down) and event resolution status, and are keyword searchable. Events can be updated or marked as resolved with follow-up information, such as repair details. An in-house browser-based version of the Machine Log system was used for the first 18 months of data collection before we transitioned to the functionally similar cloud-based Total QA Machine Log system (Image Owl, Inc., Greenwich, NY) for the remaining 12 months of the study.

Linac Machine Log longitudinal analysis

Between September 2017 and March 2020, the Machine Log was queried for the following parameters: linac type, number of reported faults, fault descriptions, and clinical downtime. From the reported faults and associated descriptions, events that resulted in a "linac down event" (LDE) were identified and separately tabulated. An LDE was defined as a linac fault that interrupted or prevented patient treatments and required cancellations or adjustments to the linac schedule. Faults that resulted in an LDE could be classified as such in the Machine Log through a dedicated checkbox, or by the presence of the keywords "MACHINE DOWN" or "LINAC DOWN" in the free text of the report. The reported downtime per fault was estimated by clinical staff from the time of the initial report to the return to service. The record and verify (R&V) system (Aria Oncology Information System, Varian Medical Systems, Palo Alto, CA) was used to determine how many patients were rescheduled or canceled after an LDE or other significant fault. The R&V was also queried to cross-validate the staff estimates of clinical downtime by inspecting the appointment schedule at the date and time of an LDE. The fault reports were classified into one of eight categories representing major sub-systems present on each linac: imaging systems (including kV, MV, and CBCT), MLC, beam generation (including vacuum), cooling systems, software/network, patient support, accessories (including electron applicators, light field/optical distance indicator and related setup aids), and miscellaneous faults falling outside these categories. The start of the analysis (September 2017) coincided with the post-commissioning deployment of the new Halcyon linac into routine clinical use.
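To make the LDE classification rule above concrete, the short sketch below shows how a report could be flagged as an LDE and tallied per linac. This is a minimal illustration, not the actual Machine Log implementation; the report structure and field names are assumptions introduced for the example.

```python
from dataclasses import dataclass

# Hypothetical report record; the field names are illustrative and not
# the actual Machine Log schema.
@dataclass
class FaultReport:
    linac: str                 # e.g., "TrueBeam 1", "21iX", "Halcyon"
    category: str              # one of the eight sub-system categories
    free_text: str             # user-entered fault description
    linac_down_checkbox: bool  # dedicated "linac down" checkbox
    downtime_hours: float      # staff-estimated downtime

LDE_KEYWORDS = ("MACHINE DOWN", "LINAC DOWN")

def is_linac_down_event(report: FaultReport) -> bool:
    """A fault counts as an LDE if the dedicated checkbox was ticked or
    if either keyword appears in the free text of the report."""
    if report.linac_down_checkbox:
        return True
    text = report.free_text.upper()
    return any(keyword in text for keyword in LDE_KEYWORDS)

def tally_ldes(reports: list[FaultReport]) -> dict[str, int]:
    """Count LDEs per linac over the evaluation period."""
    counts: dict[str, int] = {}
    for report in reports:
        if is_linac_down_event(report):
            counts[report.linac] = counts.get(report.linac, 0) + 1
    return counts
```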
The start of the analysis (September 2017) coincided with the post-commissioning deployment of the new Halcyon linac into routine clinical use. Therefore, in addition to the full 30-month analysis, we performed a separate analysis on a six-month subset of data (September 2017-March 2018), during which we expected higher technical support demands for Halcyon as quality issues were resolved and staff became familiar with the system's abilities and demands. During this period, clinical use was also constrained by anatomic site as users and clinicians developed confidence in the system's capabilities. The results of this subset analysis were compared with the overall 30-month analysis, during which clinical use of Halcyon expanded both in patient numbers and in sites treated, and several hardware and software upgrades were made. The goal of this comparison was to identify technical reliability and patient throughput trends and other operational insights related to the deployment of a new treatment unit.

Linac clinical utilization
The R&V system was queried for the number of fractions treated and MU delivered as metrics of linac utilization. To enable a meaningful cross-comparison between linacs with varying patient loads, clinical use profiles, and capabilities, the number of reported faults per sub-system was normalized to the number of fractions delivered and to the number of monitor units (MU) delivered. For ease of data presentation, these metrics were expressed as the number of reported faults per 1000 fractions delivered and the number of faults per 1×10^6 MU delivered. We considered the number of fractions delivered by a linac to be a reasonable surrogate for the level of demand placed on sub-systems that are used intermittently in a treatment fraction (i.e., before and after the treatment), such as the patient support, imaging, and control interfaces. Conversely, we considered the number of MU delivered by a linac to be a surrogate for the level of demand placed on the core systems of the linac that are in more continuous operation during a treatment fraction, such as cooling, beam generation, steering, and collimation. Other metrics of linac clinical utilization and patient load were likewise calculated relative to the number of fractions treated and MU delivered, respectively, in order to account for the different clinical uses of the linacs.

Results
Halcyon performance was evaluated at six months and at 30 months from the start of clinical service. During the initial six-month period, the anatomic sites treated with Halcyon were limited as staff developed familiarity with the capabilities and performance of the linac.

Initial six-month evaluation of Halcyon clinical use
The initial six months of Halcyon clinical service comprised 119 clinical days. In this time, Halcyon delivered 20.1% of all fractions treated in the clinic, reflecting its reduced usage compared with the other linacs in the clinic. Normalized per 1000 fractions delivered by each linac (Figures 1a-1h), the TrueBeam linacs reported more faults related to beam generation, imaging, and the MLC (Figures 1a, 1c, 1d). Halcyon reported fewer MLC-related faults than the TrueBeam linacs, but more than the 21iX (Figure 1c). The 21iX reported the most faults related to cooling systems (Figure 1b) and the fewest faults related to the patient support/pendant (Figure 1e). Accessories faults were almost exclusively reported by the 21iX (Figure 1f). Halcyon reported the most software/network-related faults (Figure 1g) and patient support-related faults (Figure 1e).
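For clarity, the following minimal sketch shows how the usage-normalized fault rates used throughout these results are computed; the counts are placeholder values for illustration only, not our institution's data.

```python
# Sketch of the usage-normalized fault metrics. The numbers below are
# placeholders for illustration, not the values reported in this study.
linacs = {
    # linac: (reported faults, fractions delivered, MU delivered)
    "Halcyon":    (120, 18000, 9.0e6),
    "TrueBeam 1": (200, 15000, 7.5e6),
    "21iX":       (90,  12000, 4.0e6),
}

for name, (faults, fractions, mu) in linacs.items():
    per_kfx = 1000.0 * faults / fractions   # faults per 1000 fractions
    per_mmu = 1.0e6 * faults / mu           # faults per 1x10^6 MU
    print(f"{name}: {per_kfx:.1f} faults/1000 fx, {per_mmu:.1f} faults/1e6 MU")
```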
Over the subsequent 24 months (30 months since introduction), the absolute numbers of faults on Halcyon increased, coincident with the period of expanded clinical utilization and an associated increase in patient throughput (Table 3). However, when normalized to the number of fractions delivered (Figures 3a-3h), the trends in Halcyon reliability continued, with a lower frequency of faults compared with the other linacs in beam generation, MLC, imaging, and accessories (Figures 3a, 3c, 3d, 3f). The relatively higher rates of faults related to the patient support (Figure 3e) and software/network seen in the initial six-month period also continued relative to the other linacs (Figure 3g).

FIGURE 3: Comparison of the rate of linac faults per 1000 fractions delivered, organized by linac and sub-system fault over 30 months of clinical service (inclusive of the initial six-month Halcyon evaluation period).

When normalized by the number of MU delivered by each linac (Figures 4a-4h), Halcyon demonstrated the lowest frequency of faults in systems related to output, including beam generation (Figure 4a) and the MLC (Figure 4c), and was roughly equivalent to TrueBeam 1 for the lowest frequency of faults in cooling systems (Figure 4b). When normalized by MU, the high frequency of cooling-related faults on the 21iX is further emphasized (Figure 4b). Fault frequencies for sub-systems not directly related to output, including the patient support and software/network, became more similar across linacs when normalized by MU instead of by fraction (Figures 4e, 4g).

Discussion
In radiation oncology, much emphasis has been placed on creating treatment delivery systems and clinical processes that maximize equipment, human, and organizational reliability. The American Association of Physicists in Medicine (AAPM) report of Task Group 100 recommends risk assessment methods, such as Failure Modes and Effects Analysis and Fault Tree Analysis, to identify and develop strategies to mitigate the risk of errors arising from human and systems failures [22]. Real-time equipment fault reporting and electronic incident learning systems are another way of mitigating risk by providing a feedback mechanism for quality improvements to both clinical users and vendors. Electronic fault reporting databases such as the Machine Log can be analyzed for linac performance and technical reliability and yield insights into machine performance and clinical throughput. The longitudinal analysis of linac faults presented here revealed several notable observations and trends regarding linac technical reliability and clinical availability.

Halcyon initial clinical deployment and operational insights
While some new equipment installations can be problematic, Halcyon, despite its new linac design including a novel MLC and user interface, was at least as reliable as conventional Varian linac designs during the initial six-month clinical deployment. Over the subsequent 24 months of clinical service (30 months total), Halcyon delivered more fractions and more MU per reported fault and per LDE than either the TrueBeam or the 21iX design. Halcyon faults related to beam generation, beam steering, and cooling systems occurred less frequently than on the other linacs. This may be because Halcyon employs a single photon energy, compared with the multiple energies and electron/photon modes available on the C-arm linacs.
At the same time, Halcyon delivered the most MU, both in absolute terms and relative to the number of fractions and reported faults, suggesting a core systems design that is robust to high treatment workloads. Halcyon also reported the fewest MLC faults per MU and per fraction delivered. This is noteworthy because Halcyon employs more intensity modulation than the C-arm linacs, potentially placing more mechanical stress on the system. The technical reliability of the Halcyon MLC is a finding generally consistent with Wang et al., who reported that the mechanical accuracy and reproducibility of the Halcyon MLC were superior to those of the TrueBeam MLC [9]. A major design goal of Halcyon was high patient throughput with reduced service and support requirements [23]. Our findings indicate that Halcyon had less clinical downtime and fewer LDEs than linacs with comparable patient throughput, demonstrating that Halcyon meets this design goal. From the low clinical downtime relative to the number of faults, it can also be deduced that most Halcyon faults were quickly recoverable, either by clinic staff acting independently of vendor support or with the assistance of remote technical support, and did not require long waits for vendor field service engineers to arrive on site to resolve the event. While laudable in their own right, meeting these design goals would also make Halcyon suitable for deployment in resource-limited environments, including low- and middle-income countries (LMICs). A comparative analysis of linac reliability in the United Kingdom and Africa found that linac failures were significantly higher in LMICs, with the MLC and cooling systems responsible for the highest rates of faults [7]. In this work, Halcyon reported the lowest rate of MLC faults and among the lowest rates of cooling system faults. A further benefit of having a highly reliable treatment unit in the clinic is shown in Figure 2, where all linacs exhibited a decrease in faults per fraction delivered after the initial six-month deployment of Halcyon had ended. We attribute this to the reduction in patients treated per day on all non-Halcyon linacs as our institution gained confidence and experience with Halcyon and increased the patient load on the new linac. This redistribution of patient load reduced the demands placed on all other linacs and their sub-systems, which may have contributed to a lower frequency of technical faults. Imaging systems on Halcyon proved to be reliable compared with the other linacs at our institution. In the initial six months of service, Halcyon reported fewer imaging-related faults than the conventional linacs; however, this can be partly explained by the relatively simple imaging systems in place over this period compared with the other linacs in our department. Halcyon had MV-CBCT only at the time of commissioning and during the first year of operation. Kilovoltage X-ray imaging capabilities, including kV-CBCT, were not added to Halcyon until after one year of service, when the system was upgraded to the Halcyon 2.0 standard. However, even after these additional imaging systems were added, Halcyon still reported fewer imaging-related faults per 1000 fractions delivered (Figure 3d) and per 1×10^6 MU (Figure 4d) than the other linacs.
We believe that this may be partly due to the less complex mechanical design of the kV imaging system on Halcyon, in which the source and detector are fixed to the gantry and do not require multiple moving arms to position them for imaging, as with the C-arm linac on-board imager (OBI) system, a common source of faults in our experience with the other OBI-equipped linacs in our clinic. Software- and networking-related faults were one area where Halcyon reported more faults than the other linacs in our institution, both during the initial six-month introduction to clinical service and over the full evaluation period (Figures 3g, 4g). As both the maturity of the software and the experience of staff in managing the network configuration grew over time, faults of this nature were expected to decrease. Indeed, some software-related issues were addressed in the upgrade to Halcyon 2.0, when the kV imaging hardware was also added. However, software-related issues persisted. Such faults may depend on information technology infrastructure and may be institution-specific; a comparison with other user experiences is needed to determine whether software and networking faults are a common problem with Halcyon. Halcyon also exhibited a recurring issue with the patient support that became notable in the second year of service but was resolved through a vendor-provided hardware fix.

TrueBeam operational insights
The imaging systems (MV portal imager and kV OBI system) were the greatest source of faults for both TrueBeam linacs in our clinic. Imaging system faults were also more frequently reported compared with the 21iX and Halcyon linacs (Figures 3d, 4d). At our institution, the TrueBeam linacs are primarily used for imaging-intensive procedures, such as SRS and SBRT. It remains unclear whether the higher frequency of imaging system faults on the TrueBeam linacs is a consequence of their design or of their more intensive clinical utilization compared with the other linacs. Beam generation was the next most common source of faults (Figures 3a, 4a). The TrueBeam linacs have multiple energies, including FFF modes, and are employed to deliver respiratory-gated treatments. The associated demands of these capabilities on the beam generation system could explain the higher beam generation fault rates compared with Halcyon, which is a single-energy (6 MV) machine. However, the TrueBeam linacs reported only 34 faults related to electrons or photon energies other than 6 MV over the 30-month evaluation period, out of a total of 764 TrueBeam linac faults (4.4%). This low number is not unexpected, as most fractions in the IMRT era are delivered with 6 MV photons. The low proportion of non-6 MV-associated faults suggests that having multiple energies was not a significant factor in this analysis of linac technical reliability. The MLC on the TrueBeam linacs was the third greatest source of faults per fraction delivered (Figure 3c). Again, this likely reflects their primary clinical role in delivering high-dose intensity-modulated treatments, such as SRS and SBRT. Indeed, when normalized by MU delivered, the frequency of MLC faults becomes more comparable to that of the 21iX linac, which has the same MLC design (Figure 4c).

21iX operational insights
The 21iX is the oldest linac in our clinic and is the only linac used to deliver TBI and TSE treatments. These two techniques require extended beam-on times and generate correspondingly higher thermal loads in the linac.
Reliable cooling systems are therefore essential to treating these patients without interruption. This analysis showed that the greatest source of faults for the 21iX originated with the cooling systems when corrected for usage (Figures 3b, 4b). The 21iX reported the fewest software- and network-related faults, which is expected given the older and less complex control software in use at the treatment console relative to the TrueBeam linacs, as well as the maturity of the network configuration relative to the more recent Halcyon installation (Figures 3g, 4g). When not used for TBI or TSE, our institution's 21iX primarily delivers three-dimensional conformal radiation therapy (3D-CRT), corresponding to lower demand on the MLC and imaging systems compared with the TrueBeam and Halcyon linacs. Consequently, the 21iX exhibited a low rate of imaging- and MLC-related faults. Relative to the other linacs, the patient support system on the 21iX reported the fewest faults per 1000 fractions delivered (Figure 3e). As with imaging and use of the MLC, we believe this is because the treatments delivered on the 21iX rely less on automatic couch movements for image-guided radiation therapy (IGRT), or do not use the couch at all to support the patient, as our institution's current technique for delivering TBI uses an extended source-to-surface distance (SSD) with the patient on a gurney. When normalized to MU delivered, the 21iX patient support exhibited technical reliability similar to the other C-arm-based linacs (Figure 4e).

Towards usage-based quality assurance intervals
In this study, linac performance and reliability were evaluated by comparing linac fault frequency as a function of MU delivered and of fractions treated, to account for the different clinical use profiles of each linac. Expressing fault frequency in terms of MU and fractions delivered also yielded possible explanations for the observed technical reliability of some sub-systems. For example, the 21iX was the oldest linac in our clinic but reported the fewest faults in absolute numbers among the conventional linac designs and delivered the most fractions per fault. However, when evaluated per MU delivered, the 21iX delivered the fewest MU per fault. This suggests that for this linac, although some ancillary sub-systems had high technical reliability, core linac systems such as beam generation, collimation and cooling may require increased maintenance attention. It has been proposed that age could define the optimum maintenance strategy for medical equipment, with more technologically advanced systems receiving predictive maintenance strategies while older systems receive more traditional preventive maintenance [24]. Vendor service engineers are a limited resource in the clinic, often supporting multiple machines at multiple institutions within a geographic region. To address this challenge efficiently, many industries have shifted their preventive maintenance (PM) strategies from calendar-based schedules to those based on continuous device usage over the equipment lifespan, which is both more effective and more efficient [25].
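As a simple illustration of such a usage-based trigger, the sketch below flags a QA test as due once a sub-system has accumulated a given amount of use since its last check; the thresholds are illustrative assumptions, not validated QA intervals.

```python
# Sketch of a usage-based QA trigger: a test becomes due once the usage
# accumulated since the last check exceeds a per-sub-system threshold.
# Thresholds below are illustrative assumptions, not validated intervals.
QA_THRESHOLDS = {
    "mlc": {"mu": 5.0e5},            # MU-driven: core/output sub-systems
    "beam_generation": {"mu": 1.0e6},
    "imaging": {"fractions": 2000},  # fraction-driven: per-fraction sub-systems
    "patient_support": {"fractions": 3000},
}

def qa_due(subsystem: str, mu_since_last: float, fx_since_last: int) -> bool:
    """Return True when either the MU or fraction budget for the
    sub-system has been exhausted since its last QA check."""
    limits = QA_THRESHOLDS[subsystem]
    return (mu_since_last >= limits.get("mu", float("inf"))
            or fx_since_last >= limits.get("fractions", float("inf")))

# Example: 6.2e5 MU and 1500 fractions since the last checks.
print(qa_due("mlc", 6.2e5, 1500))      # True -> schedule MLC QA
print(qa_due("imaging", 6.2e5, 1500))  # False -> not yet due
```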
The development of similarly efficient quality assurance (QA) strategies to maintain linac quality and safety will allow clinical medical physicists to contribute more directly to patient care, including online adaptive therapy [26], patient communication and education [27], and other new roles as described by the Medical Physics 3.0 initiative [28]. The results of this study suggest a QA strategy in which intervals are shifted away from calendar-based (i.e., weekly, monthly, annual) schedules towards ones informed by generalized linac fault trends and the known sub-system problem areas for each linac and vendor, such as can be identified through a Machine Log analysis. For example, the analysis of our institution's linacs indicates that a customized PM and QA strategy for our clinic could emphasize the imaging systems and MLC on the TrueBeam linacs and place less emphasis on MLC QA for Halcyon. The relationship between linac faults and MU and fractions delivered indicates that the intervals for this new QA approach should be based on usage, not on the time since the last test. Our analysis indicated that faults per MU and per fraction decreased when relative patient loads on the linacs were reduced, suggesting that a usage-based PM and QA schedule is more appropriate than one that is purely calendar-based. A robust QA strategy would combine a usage-based approach with a daily approach targeted at monitoring sub-systems that are more likely to fail over time. The potential benefits to clinic efficiency could be investigated by repeating the analysis presented in this work and quantifying linac performance before and after implementation of the new QA strategy.

Limitations
A limitation of this work is that the Machine Log system presented here is based on voluntary staff participation to compose and distribute reports. As a result, it is possible that some faults were simply cleared and not reported, particularly in time-constrained situations. However, our institution has a high participation and reporting rate thanks to the streamlined and user-friendly web browser interface for reporting and following up on linac events. Automatic fault reporting by the linac would be a solution to this problem, and such data could potentially be used to predict downtime [29] or the need for QA [30]. However, automatic reporting alone would not provide the rich contextual information that is often beneficial in troubleshooting present and future linac problems. Another limitation of this work is that it compares different linac designs with further differences in the age, capabilities, and clinical use of each linac. We addressed these differences by comparing faults from similar sub-systems and by normalizing fault frequencies by the numbers of fractions and MU delivered. From Table 3 and Figures 3 and 4, there was no obvious correlation between linac age and the frequency of faults by MU or by fraction. Furthermore, we observed only a small proportion of faults (6.3%) attributable to features not found on all linacs; thus, we did not consider capability differences alone to be a factor in the superior or inferior reliability performance of some linacs. A more general limitation of this analysis is that the findings reflect a single institution's experience with these linacs and the fault trends for their respective sub-systems.
A comparison against the experience of other institutions deploying these linac designs will be required to validate these findings and to help determine which fault trends are installation-specific or possibly attributable to the linac design. The Machine Log analysis demonstrated here is a generalizable method for analyzing linac performance and can be applied to other institutions and delivery systems to investigate their comparative performance.

Conclusions
A longitudinal analysis of data from a linac fault reporting system identified several trends and operational insights related to linac reliability, performance, and clinical usage. Over 30 months of routine use in a busy academic institution, the new Halcyon linac design proved at least as reliable as the well-characterized conventional linacs in our clinic and may be more reliable in some critical sub-systems related to beam generation, multi-leaf collimation, and imaging. Longer-term evaluation and comparison with other Halcyon installations would be necessary to determine whether these findings and trends persist. The C-arm linacs exhibited varying technical reliability trends by sub-system, including imaging, MLC, and cooling. Other trends became apparent when normalized by MU delivered and fractions treated. A future study will employ this analysis to investigate whether shifting QA and preventive maintenance schedules away from calendar-based intervals and towards intervals that reflect actual linac usage and observed technical reliability contributes to a clinic workflow with fewer equipment-related disruptions, improving efficiency and patient safety.

Additional Information
Disclosures
Human subjects: All authors have confirmed that this study did not involve human participants or tissue.
Compositional Grounded Language for Agent Communication in Reinforcement Learning Environment

In a context of constant evolution of technologies for scientific, economic and social purposes, Artificial Intelligence (AI) and the Internet of Things (IoT) have seen significant progress over the past few years. As much as human-machine interactions are needed and task automation is undeniable, it is important that electronic devices (computers, cars, sensors, etc.) can communicate with humans just as well as they communicate with each other. The emergence of automated training and neural networks marked the beginning of a new conversational capability for machines, illustrated by chat-bots. Nonetheless, this technology alone is not sufficient, as chat-bots often give inappropriate or unrelated answers, usually when the subject changes. To improve on this technology, the problem of defining a communication language constructed from scratch is addressed, with the intention of giving machines the possibility to create a new and adapted exchange channel between them. Equipping each machine with a sound-emitting system that accompanies each individual or collective goal accomplishment, the convergence toward a common "language" is analyzed, exactly as it is supposed to have happened for humans in the past. By constraining the language to satisfy the two main human language properties of being ground-based and compositional, rapidly converging evolution of syntactic communication is obtained, opening the way to a meaningful language between machines.

Although existing approaches look promising, much of their success is for the moment a result of intelligently designed statistical models based on static, passive, and mainly supervised regimes ultimately trained on large static datasets [23][24]. In this context, the use of NLP for creating a seamless and interactive interface between humans and machines will continue to be a priority for today's and tomorrow's increasingly cognitive applications. Developing a sophisticated artificial language system [25][26][27] is mandatory for machines to become more intelligent and to gain the ability to learn like humans [28]. In parallel, it could also open important insights into questions related to the development of human language and cognition. It immediately comes out that, if communication is to be created from first principles, the only way to do it is from necessity. In other words, approaches learning to imitate human language from examples, even if useful, only capture structural and/or statistical relationships. They completely miss the functional aspect of language and do not provide any answer on why language exists [29][30][31]. More precisely, they do not relate language as it stands to the reason for its existence, which is that it is a successful coordination means between humans. Here, to replicate as much as possible for machines what occurred for humans, it is claimed that if such a language is created from scratch, it should necessarily develop in an environment giving this emerging language the two main properties of the human one, i.e., being ground-based and compositional [32][33][34], even if other models can be conceived [35][36][37][38][39]. The present project aims at developing a new NLP technique by fostering the emergence of a compositional and ground-based language amongst machines, exactly as it already exists between humans. Human language is grounded because it is based on experience in the real world.
Whereas a dictionary defines words with other words, a human will associate a word with sensory-motor experience (sight, touch, etc.). As the agents will use words to describe concepts in their environment, a ground-based language will emerge rather easily. Compositionality in a language consists in the meaning of a complex expression being determined by the meaning of its constituent expressions and the rules used to combine them [40][41], in the idea of adding up individual words to create a meaningful sentence altogether. The emergence of compositionality in a language only happens if the number of describable concepts (or learning events) is larger than the vocabulary size, following Zipf's law, which states that the frequency of occurrence of a word is inversely proportional to its rank i [42].

Environment Description
In this work, a physically simulated two-dimensional environment consisting of N agents and M landmarks in continuous space and discrete time is considered. Both agent and landmark entities inhabit physical positions p in space and possess descriptive physical characteristics, such as color and shape type (see Figure 1). In addition, agents can move in the environment and direct their gaze to a location l. Denote by X the physical state of an entity. To facilitate the emergence of the previous two language properties, the environment considered here is a cooperative and partially observable Markov game [43], which is a multi-agent extension of a Markov decision process. The cooperative setting allows formulating the problem as a joint minimization across all agents, as opposed to the minimization-maximization problems resulting from competitive settings. The reward for each agent i (i=1,…,N) at time t is a function of iX(j), the observation of entity j's physical state in agent i's reference frame, and of O, the set of observations made by all agents. Finally, the system dynamical equations are given in [45]. In the present multi-agent environment, each agent for simplicity will act by sampling actions from the same stochastic policy π on the same sets of observations O and actions A (which are the features/parameters in the model).

Optimum Policy Determination
The problem is to find the common policy π maximizing the shared return r(·,·) [46]. This is achieved by using the Gumbel-Softmax categorical re-parameterization, which gives an end-to-end differentiable model. Assuming a random variable with a categorical distribution with class probabilities π_j (j=1,…,k), the Gumbel-Max trick [47] provides a simple and efficient way to draw samples z from that categorical distribution as z = argmax_j (g_j + log π_j), where the g_j are i.i.d. Gumbel(0,1) samples.

Compositionality with Dirichlet Process
As seen in Figure 2, the plot of the number of learning events vs vocabulary size is closely approximated by Zipf's law. The approximation by a "Zipfian" law is not surprising, as it describes the patterns of many natural situations (word frequencies in language, city populations, website traffic, etc.). A Dirichlet process is a probability distribution the range of which is itself a set of probability distributions [48]. In the present case, the word the agent is going to choose in its vocabulary is the base variable, and another distribution is applied on it to describe how the random variable is distributed. The less a symbol is uttered, the lower the probability that it will be sampled in the future and the higher the penalty; a sketch of this sampling scheme is given below.
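The following minimal sketch illustrates the two sampling components just described: the Gumbel-Max draw from a categorical policy and a Chinese-restaurant-process-style approximation of the Dirichlet utterance sampling, under which rarely used symbols become progressively less likely. The vocabulary, probabilities and hyper-parameter values are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_max_sample(log_probs: np.ndarray) -> int:
    """Draw a categorical sample z = argmax_j (g_j + log pi_j),
    with g_j i.i.d. Gumbel(0, 1) noise (the Gumbel-Max trick)."""
    g = rng.gumbel(size=log_probs.shape)
    return int(np.argmax(log_probs + g))

def crp_sample(counts: dict, alpha: float) -> str:
    """CRP-style utterance sampling: an existing word x has probability
    N_x / (N + alpha); a brand-new word has probability alpha / (N + alpha)."""
    n_total = sum(counts.values())
    words = list(counts)
    probs = [counts[w] / (n_total + alpha) for w in words]
    probs.append(alpha / (n_total + alpha))   # new-word slot
    choice = rng.choice(len(probs), p=probs)
    word = f"new_{len(words)}" if choice == len(words) else words[choice]
    counts[word] = counts.get(word, 0) + 1    # reinforce the chosen word
    return word

# Illustration: popular words keep being reinforced, limiting vocabulary growth.
counts = {"goto": 5, "red": 3, "look": 1}
for _ in range(5):
    print(crp_sample(counts, alpha=0.5))
print(gumbel_max_sample(np.log(np.array([0.2, 0.5, 0.3]))))
```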
As noted above, this penalty mechanism fosters the use of the most "popular" words, hence limiting the size of the used vocabulary and implying the emergence of compositionality. The probability of sampling an existing utterance x follows Equation (5), P(x) = N_x / (N + α), where α is the Dirichlet hyper-parameter determining the probability of picking a new word (itself picked with probability α / (N + α)), N_x is the number of times utterance x has been picked, and N is the total number of utterances. The reward is defined from this distribution, penalizing rarely uttered symbols.

Results and Discussion
In the present experiment, an environment is first built up with one agent and two landmarks (see Figure 1). Checking language grammatical correctness is not scalable as a way to make sure that the language developed by the agents is itself correct. Even for a proportional increase in the words learned by agents, the number of sentences they could correctly produce rises exponentially, with unavoidable consequences on computational power requirements, calling for a specific and more restricted type of language for the agents.
Creating the Internet of Augmented Things: An Open-Source Framework to Make IoT Devices and Augmented and Mixed Reality Systems Talk to Each Other

Augmented Reality (AR) and Mixed Reality (MR) devices have evolved significantly in recent years, providing immersive AR/MR experiences that allow users to interact with virtual elements placed in the real world. However, to make AR/MR devices reach their full potential, it is necessary to go further and let them collaborate with the physical elements around them, including the objects that belong to the Internet of Things (IoT). Unfortunately, AR/MR and IoT devices usually make use of heterogeneous technologies that complicate their intercommunication. Moreover, the implementation of the intercommunication mechanisms requires involving specialized developers who have experience with the necessary technologies. To tackle such problems, this article proposes the use of a framework that makes it easy to integrate AR/MR and IoT devices, allowing them to communicate dynamically and in real time. The presented AR/MR-IoT framework makes use of standard and open-source protocols and tools like MQTT, HTTPS or Node-RED. After detailing the inner workings of the framework, its potential is illustrated through a practical use case: a smart power socket that can be monitored and controlled through Microsoft HoloLens AR/MR glasses. The performance of such a practical use case is evaluated, and it is demonstrated that the proposed framework, under normal operating conditions, can respond in less than 100 ms to interaction and data update requests.

Introduction
The Internet of Things (IoT) paradigm has already been considered for multiple applications in fields like smart appliances [1,2], precision agriculture [3], smart healthcare [4,5] or smart buildings and cities [6]. In fact, some reports point to a huge growth in IoT deployments, with 75 billion devices in operation by 2025 [7]. Many IoT systems make use of web-based or smartphone apps to monitor and control them, which are adequate for most situations, but the latest advances in Augmented Reality (AR) and Mixed Reality (MR) can bring the interaction with such systems to a new level. The first pioneering AR/MR developments were carried out in the 1960s [8,9], but they were not practical for massive use. It was not until the 1990s, after Boeing documented the first industrial AR applications [10], that the field regained interest from industry and academia [11]. However, the big push for AR/MR came from the German government in the late 1990s, when it funded the ARVIKA project, which involved some of the largest industrial manufacturers (e.g., Airbus, EADS, BMW, Audi, VW, Daimler, Ford) in the development of mobile AR/MR systems [12,13]. Luckily, during the last years AR/MR devices have improved significantly thanks to advances in electronics, computing technologies and wireless communications. Such advances have also decreased AR/MR commercialization prices, which has sparked the interest of new consumers and of industries that are transforming their processes through the Industrial IoT (IIoT) or Industry 4.0 paradigms [14]. One of the most relevant challenges that AR/MR developers currently face when interacting with IoT devices is technology heterogeneity, which makes it difficult to implement even simple interactions. Thus, most AR/MR frameworks use technologies that differ remarkably from those used for the development of IoT device software.
Moreover, the skills usually required of AR/MR and IoT experts are also different, which complicates communication between the two fields. In order to allow AR/MR applications to interact with surrounding IoT devices, it is necessary to develop communication mechanisms that enable exchanging data in the same 'language', so that the involved devices understand each other. Unfortunately, the development of such mechanisms is not straightforward due to the previously mentioned heterogeneity issues, computational hardware limitations (especially in the case of resource-constrained IoT devices) and the development restrictions imposed by AR/MR frameworks. To tackle the aforementioned problems, this article includes the following contributions. First, it faces the challenge of interconnecting very different types of systems by proposing an AR/MR-IoT framework that allows for integrating AR/MR platforms and IoT devices easily through the use of open-source standard communication protocols. Such a framework enables implementing ubiquitous and scalable applications in a flexible way and eases their configuration. In addition, the proposed framework is designed to allow for implementing complex functionality on top of it in a simple way. Such development is eased by providing the framework software through GitHub, together with implementation examples [15]. Moreover, this article follows a hands-on approach in order to guide practitioners and developers in the development of future AR/MR-IoT frameworks. Finally, a number of experiments are presented in order to evaluate the performance of the framework in terms of Quality of Experience (QoE). The results show that the framework is fast, being able to perform interaction and data updates in less than 100 ms under normal operating conditions. The rest of this article is organized as follows. Section 2 reviews the most relevant related works. Section 3 describes the design and implementation of the proposed AR/MR-IoT framework. Section 4 details how the framework can be used for implementing an energy monitoring and control application based on a smart power socket and Microsoft HoloLens smart glasses. Finally, Section 5 is dedicated to the evaluation of the response latency of the framework, while Section 6 is devoted to the conclusions.

Related Work
In the last years, novel smart environments have been conceived under what has been denominated Extended Reality (XR). In such spaces, digital, physical, and social layers are strongly intertwined, enabled by technologies such as AR/MR, Virtual Reality (VR) and immersive displays [16]. Such technologies offer more natural ways to exploit human perception and interaction [17], and provide enhanced data use [18]. Thus, they represent an opportunity to rethink the way in which people collaborate and interact nowadays. Examples of such collaborations are recent novel applications for telemedicine [19], shared mission planning scenarios [20] or enhancing astronaut procedure execution [21]. The IoT is a field where protocol, standard and hardware compatibility problems are common due to the diversity of manufacturers and developers [22]. The mentioned problems also affect AR/MR interfaces, but only a few academic articles have addressed them. An example is the work of Croatti et al., who detail the main components of an infrastructure for supporting what they called the Web of Augmented Things (WoAT) [23].
Another example is presented in [24], where a comprehensive review of the main challenges for enhancing AR/MR-IoT integration is provided. In such a review, the authors emphasize the issues that arise when managing and visualizing distributed object-centric data; when accessing, controlling and interacting with IoT objects; and when carrying out interoperable content exchanges. Moreover, the same authors presented in [25] a proof-of-concept demonstrator to enhance shopping experiences. Another demonstrator is presented in [26], in that case for monitoring stress analyses of metal shelving. It is also worth mentioning other implementations like the one detailed in [27], where the authors studied the problems related to the automatic discovery of IoT sensor devices and proposed to solve them by using relational localization mechanisms. Regarding AR/MR-IoT frameworks, four relevant works can be cited. The first one is described in [28], where the authors present an AR framework whose tracking algorithm depends on the context, which is detected automatically by the IoT infrastructure. The second relevant work is detailed in [29]: the researchers propose to integrate Microsoft HoloLens smart glasses [30] with an IoT platform through a RESTful API. The third work focuses on integrating IoT devices and virtual elements with the aim of creating 4D experiences under the concept of the Virtual Environment of Things (VEoT) [31]. Such real-time experiences enable users to interact with IoT networks through avatars and holograms that represent digital twins of sensing devices. The VEoT concept is demonstrated at the XReality lab of Texas State University [32], which provides simplified examples of object recognition, IoT-to-hologram interaction and network topology visualization. Additionally, it should be noted that other works focused on facing the specific challenges posed by client-server infrastructures, IoT resource-constrained devices, mobility, interaction methods, data management or ad-hoc 3D streaming. One such work is described in [33], where the authors propose a hybrid framework that facilitates 3D texture streaming, providing a high-quality service with limited power consumption.

Quality of User Experience
User eXperience (UX) is a key factor to consider when designing an AR/MR-IoT framework, since UX will determine to a great extent the likelihood of user adoption. Quality of Experience (QoE) evaluation has been an active research topic in the AR/MR field in recent years. Academic works range from the QoE evaluation of basic AR systems [34,35] to more sophisticated approaches that evaluate the use of Head-Mounted Displays (HMDs) [36,37] with sophisticated physiological metrics [37,38]. For instance, in [34] the authors compare the UX between an AR manual workstation and a video for assembly assistance. Their aim was to determine the probability of user adoption of an AR interface for such a use case. The obtained results show better task performance using an AR interface, with a reduction both in time to completion and in the number of errors. An animated AR system was also compared with a paper-based manual system as a guidance tool for an assembly task in [35]. Such a work conducted formal experiments with 50 participants, whose cognitive workload and learning curve were measured when using the system. The results of the experiments showed that the AR solution achieved fewer errors, shorter time to completion and lower total workload.
Furthermore, the learning curve of the trainees improved significantly. One of the most complete evaluations of an HMD was performed in [36]. Such a work evaluated the QoE of an AR system for solving a 3 × 3 Rubik's Cube (an NP-complete problem with 43 quintillion possible states) and compared it with the use of paper-based instructions. The experimental methodology included the analysis of both implicit and explicit QoE metrics for the use case of task assistance. Such a methodology involves six phases: sampling, screening, baseline, training, practice and testing. Task performance was objectively measured using task success rate and time to completion. Furthermore, in order to infer emotional state during task realization, the implicit metrics of electrodermal activity (EDA), heart rate, skin temperature and facial action units were collected. Finally, with respect to explicit metrics, a Likert-scale questionnaire was used to subjectively report QoE under six variables: utility, usability, interaction, aesthetics, efficiency and acceptability. Emotional state was reported by participants using a Self-Assessment Manikin (SAM) questionnaire to reflect their emotions upon task realization. The SAM questionnaire involves three scales, one for each dimension of affect (i.e., arousal, valence and dominance). The results highlight the potential of AR with respect to efficiency and productivity: AR yielded higher success rates, significantly shorter time to completion, more positive emotions and less stressed states than paper-based instructions. However, users remarked on some issues with respect to aesthetics, thereby acknowledging the importance of a human-centered AR design. Another work worth mentioning studied the user experience of a visual analytics system for the drifting data of bees [37]. In such a work, Microsoft HoloLens was compared with a Windows desktop interface through a correlation analysis of the UX ratings collected from 14 test subjects through a 30-minute questionnaire using a 7-point Likert scale. The experiment included training, task solving and user feedback. The UX feedback evaluated criteria that included how intuitive, easy to use, comfortable, natural and efficient the interfaces are, and obtained an overall rating of interface preference. The study concluded that all the criteria were strongly correlated with the interface preference rating. 'Efficient' was the most strongly correlated with interface preference, suggesting that, when given a specific task to solve, end users give importance to how efficiently they can solve it. 'Comfortable' was the least strongly correlated with interface preference. Furthermore, while intuitiveness was highly correlated with the use of AR interfaces, the 'natural' metric was highly correlated with the desktop interface. The authors suggest that participants with previous experience with HoloLens found the AR interface intuitive. As a result, such participants may have had a better experience than first-time users. According to our previous experience with Industry 4.0 operators [39,40], this assumption is highly probable. Taking into account the results of the previous QoE works, it can be concluded that efficiency is one of the most important criteria regarding AR/MR user experience.
Since other criteria for QoE assessment like effectiveness or satisfaction are strongly dependent on the AR/MR-IoT application, this work will only focus on efficiency, aiming to minimize task completion time by optimizing user interactions. Such efficiency, when considering an AR/MR-IoT framework, is strongly correlated with throughput and latency values. Therefore, Section 5 will be devoted to the evaluation of the performance of the proposed framework regarding such parameters. After reviewing the state of the art, it can be concluded that there are only a few previous works with aims similar to those of the proposed AR/MR-IoT framework. However, in contrast to the work presented in this article, the vast majority of the previous works are very early developments with a relevant number of open issues that require further research.

Design and Implementation of the System
The main problem that the proposed framework aims to solve is to reduce the existing difficulties in interconnecting heterogeneous systems such as those found in IoT and AR/MR devices. For such a purpose, this framework defines different mechanisms and methods to interconnect diverse technologies by using standardized protocols that can be easily integrated into both the target platforms and existing projects. This section describes the challenges of an AR/MR-IoT system, the requirement analysis of the system, the design of each layer of the framework and how communications are handled by each component of the system.

Challenges of Implementing an AR/MR-IoT System
The process of implementing an AR/MR-IoT framework involves different challenges that are often overlooked and that should be considered during the design phase. As previously mentioned, one of the main challenges is the heterogeneity of the technologies that need to be used in order to build a complete AR/MR-IoT system. This usually involves making low-level decisions about the architecture and software in diverse areas of knowledge. Another important challenge is latency. In order to provide a proper user experience, it is necessary to keep response times low for the different communication protocols, component interactions and processing tasks. If one part of the system introduces delays into the information flow, a bottleneck will arise that eventually affects the whole system, harming its usability. Both AR/MR and IoT systems are usually complex, which makes it difficult to integrate a new communication architecture into existing software. This means that the technologies used by an AR/MR-IoT framework to implement the different interactions should be easy to integrate into existing projects. This can be achieved by providing well-documented open-source software. Some of the works previously analyzed in Section 2 address some of the mentioned challenges, but no work was found in the literature that tackles all of them. Thus, Table 1 compares the features of the proposed framework with those of the three most similar previous works.

Requirements
As previously mentioned, one of the most important characteristics of an AR/MR application is the response time to the actions carried out by a user. If the response time is high, the user experience will not be good and the interaction will turn out to be less intuitive and pleasant. In addition, there are other requirements imposed by the type of devices used to implement the system as well as by the cost of implementation.
Specifically, the main requirements for an AR/MR-IoT framework are the following:

1. Latency. This is the metric that makes it possible to quantify how fast the system can react to user actions. According to the International Telecommunication Union (ITU), the following latency times should be considered when assessing visual interaction performance [41]:
• Human reaction time (i.e., the latency between a human sensing a stimulus and responding with a muscular reaction) is roughly 1 s.
• Human auditory reaction time is about 100 ms.
• Human visual reaction time is in the range of 10 ms.
• An interface is said to have a fast reaction time if it is close to 1 ms.
However, it is important to note that the previously indicated latencies are ideal approximations; in practice, when using AR/MR devices, they vary depending on the person, on the type of interaction and on the characteristics of the visualized content. For instance, the indicated human visual reaction time is aimed at providing a good experience when visualizing fast-moving objects. In practice, most of the elements presented through an AR/MR interface do not move fast, so a good user experience can be provided despite latencies higher than 10 ms.

2. Compatibility. The system must be able to interact with heterogeneous devices transparently. In addition, the data types required by an AR/MR interface are very different from those usually required by an IoT device, so an AR/MR-IoT framework must adapt the protocols to the needs of each device.

3. Ease of development. An AR/MR-IoT framework implementation should be easy to replicate and flexible enough to make it easy to adapt to new use cases. The upper layers of the framework should allow for such flexibility. For instance, in the framework presented in this article, the use of software like Node-RED makes it possible to change data flows graphically, by just changing nodes through a visual interface.

4. Ease of integration. The framework should be easy to integrate within existing applications. As mentioned before, the characteristics of current AR/MR and IoT systems are very different, so the framework should be able to integrate every AR/MR and IoT device seamlessly.

Figure 1 depicts the communications architecture of the AR/MR-IoT framework. As can be observed, it consists of two main layers: the IoT Node Layer and the AR/MR-IoT Layer. The former is divided into two sublayers:
• The AR/MR Device Sublayer is composed of the diverse AR/MR devices (e.g., smart glasses, smartphones, tablets), which are able to exchange information with the AR/MR-IoT Layer through a wireless Access Point (AP). The pink arrows of Figure 1 represent the communications related to AR/MR devices.
• The IoT Device Sublayer is formed by the different IoT networks managed by the system. Such networks are composed essentially of sensors and actuators, which may be part of other systems like smart appliances, industrial machinery or home automation systems. The green arrows of Figure 1 indicate the main paths followed by IoT device communications.
Note that the communications topology represented in Figure 1 is a mesh, but other topologies may be considered (e.g., a star). Regarding the AR/MR-IoT Layer, it is responsible for interconnecting AR/MR and IoT devices. It may be implemented on a remote cloud on the Internet or on a local edge computing server (e.g., a fog computing node or a cloudlet [40,42]).
Due to the restrictions imposed by the compatibility requirement, the system has to be designed to be interoperable, as it has to be able to allow for communicating very heterogeneous devices. For such a purpose, it was designed to use standard protocols supported by a wide range of devices and applications. It may be tempting to try to implement the entire system using IoT communication protocols like Message Queuing Telemetry Transport (MQTT), but, unfortunately, they are currently not appropriate for implementing AR/MR-IoT applications: such applications often require sending large amounts of information (e.g., related to the displayed 3D models or to real-time streams of data) that are difficult to handle through MQTT and similar IoT protocols, which have been designed for managing small payloads. This would not fulfill the ease-of-development requirement, since designing a protocol for managing large amounts of data through the mentioned IoT protocols is usually complicated and not very efficient. Moreover, in some AR/MR development environments it is not easy to make use of protocols like MQTT due to the restrictions imposed by the available AR/MR frameworks and high-level APIs. As the ease-of-integration requirement indicates, the framework should be easy to integrate, but the cost of implementing MQTT or similar protocols on top of proprietary APIs, where possible at all, would be higher than for other alternatives already available for the target AR/MR framework. Nonetheless, according to the framework requirements, protocols like MQTT can be used for communicating IoT devices, while other solutions need to be devised to communicate such devices with AR/MR platforms. Therefore, a mixed approach is required to fulfill all the framework requirements. Considering the previous aspects, this article proposes a design that is essentially composed of the following software components:
• REST Application Programming Interface (API). It is in charge of managing the data exchanges related to the AR/MR devices. Many AR/MR frameworks offer a very high-level programming API that makes it difficult (in some cases impossible) to access low-level communications functionality. A REST API ensures compatibility with most existing AR/MR frameworks in a simple way and is flexible enough to be added to existing projects with little effort, thus fulfilling the ease-of-integration requirement.
• MQTT broker. It allows for implementing a publish/subscribe service that enables IoT devices to send and receive data asynchronously. It is based on MQTT, an open standard protocol that is really lightweight, so it can be easily implemented even in resource-constrained IoT devices. This protocol is widely used in heterogeneous IoT networks and is supported by many IoT platforms. It also provides other appealing features for IoT networks like low latency, low power consumption and delay tolerance.
• Bridge Service. This component is essential for the whole framework, since it is the one responsible for actually interconnecting the different AR/MR and IoT components. As the ease-of-development requirement states, it is essential for this component to be easily configurable so that it can be changed to fit each application. In practice, this component delivers the collected IoT data to the AR/MR devices that request them and sends commands from such AR/MR devices to IoT devices. Apart from protocol translation, it also performs caching operations, storing all recent data values shared by the IoT devices.
Thus, response times are reduced since data are always available when a user application requests them, even when the device is in low-power mode, temporarily busy or unavailable. Furthermore, the exchanged data flows are processed within this component so that each device receives the information in the most appropriate format (thus fulfilling the framework compatibility requirement).

Figure 1. Communications architecture of the proposed system.

Component Model
Figure 2 shows the component model of the system, which illustrates the main inputs, outputs and processing components of the AR/MR framework, so that further research can be carried out based on it. Regarding the inputs, they include multiple data sources that generate information that is later fed into the processing components. Such data sources are:
• Surrounding IoT devices: they generate data on their sensors and internal state.
• AR/MR device sensors: they collect information from their embedded sensors, which allow for detecting where the user currently is, where he/she has previously been, what he/she is looking at, or his/her posture.
• User profile: the framework can make use of relevant information on the user in order to later use it to determine the potential actions that he/she can carry out or the information he/she can access. For example, in a smart home, it may be interesting to limit the actions that children can perform on certain appliances. Moreover, a user profile can be linked to past actions with the objective of using them as contextual information for future actions. For instance, the framework could infer the habits of a user from his/her past actions (i.e., from the sequence of his/her daily actions) and prepare the IoT devices with which the user interacts more frequently before the user requires their use (thus decreasing interaction latency and energy consumption).
• External inputs: certain external inputs, either from remote users or from external services (e.g., Network Time Protocol (NTP) or weather forecast services), can influence when and how certain actions are performed on IoT or AR/MR devices. For example, the interactions of an AR/MR user on an IoT device may be different if they are performed during daytime or at night.
• System configuration parameters: the administrator of the system can establish specific rules and thresholds that determine how AR/MR user interactions are carried out on IoT devices.
With respect to the processing components shown in Figure 2, they gather information from the inputs and perform certain actions as an output. Specifically, Figure 2 includes the main processing elements previously illustrated in Figure 1: an MQTT broker, a REST API, a bridge service and auxiliary services. The proposed framework makes use of rules to process the data collected from the inputs, but the reader can easily observe that such inputs can feed other advanced processing components, like machine learning, deep learning or other artificial intelligence modules, which can fuse the input information and take specific decisions. Finally, regarding the outputs illustrated in Figure 2, they are related to the interaction with the surrounding IoT devices (i.e., they allow for acting on them or for collecting certain information from them) and with AR/MR devices (i.e., for displaying certain content or for storing certain data in the AR/MR device memory for later use).
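As a minimal illustration of the bridge and caching behavior described above, the sketch below subscribes to IoT data over MQTT, caches the latest value per topic, and serves it to AR/MR clients through a REST endpoint. The broker address, topic scheme and routes are assumptions for illustration, not the framework's actual configuration, and the paho-mqtt 1.x callback API is assumed.

```python
# Minimal sketch of the Bridge Service idea: MQTT in, cached REST out.
# Broker address, topics and routes are illustrative assumptions.
import threading
from flask import Flask, jsonify
import paho.mqtt.client as mqtt

cache = {}                 # latest payload per MQTT topic
lock = threading.Lock()

def on_message(client, userdata, msg):
    # Cache the most recent payload so AR/MR requests never have to
    # wake a sleeping or busy IoT device.
    with lock:
        cache[msg.topic] = msg.payload.decode("utf-8", errors="replace")

mqttc = mqtt.Client()                    # paho-mqtt 1.x style client assumed
mqttc.on_message = on_message
mqttc.connect("broker.local", 1883)      # hypothetical broker address
mqttc.subscribe("iot/+/data")            # hypothetical topic scheme
mqttc.loop_start()

app = Flask(__name__)

@app.route("/api/device/<device_id>")
def device_data(device_id):
    # AR/MR devices read cached IoT data through a plain HTTP GET.
    with lock:
        value = cache.get(f"iot/{device_id}/data")
    return jsonify({"device": device_id, "value": value})

@app.route("/api/device/<device_id>/command/<cmd>", methods=["POST"])
def device_command(device_id, cmd):
    # Commands from AR/MR clients are forwarded to the device topic.
    mqttc.publish(f"iot/{device_id}/cmd", cmd)
    return jsonify({"status": "forwarded"})

if __name__ == "__main__":
    app.run(port=8080)
```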
Support for Complex Functionality

The proposed framework was designed in such a way that it allows for implementing complex functionality on top of it in a simple way. An example of such complex functionality is related to the nature of many IoT devices: they need to implement an energy consumption policy that provides an adequate compromise between response time and low power consumption. The proposed AR/MR-IoT framework allows, for instance, for making use of the location information of IoT and AR/MR devices to warn the former when an AR/MR user is close, so that IoT devices are ready in case the user decides to interact with them. Location can be determined by using the surrounding WiFi access points to which the different devices are connected. The gathered location information can be sent to the framework Bridge Service, which stores it and takes the corresponding actions when a user moves from one area to another. Figure 3 shows a sequence diagram that illustrates how the multiple involved entities would communicate with each other. Regarding the involved IoT devices, they first indicate, during their initialization, the access point to which they are connected. The AR/MR devices act in the same way when a user moves from one area to another. The Bridge Service is in charge of notifying (by using a specific MQTT topic) the devices that are in a certain area when a user is in its vicinity. An IoT device can remain in a low-power consumption mode until it receives a notification indicating that an AR/MR user is in the vicinity. Then, it changes its energy saving configuration and starts to make more frequent updates, so response times are significantly reduced. In addition, when a user moves away from an IoT device, such a device can be warned so as to return to its low-power consumption mode. Figure 3 also illustrates the case when User A sends a command to Device 1, which is not in a nearby area, so its response time is larger than when interacting with closer IoT devices. In contrast, Figure 3 depicts the case when User B sends a command to Device 3, which is in the vicinity, so its response time is reduced significantly. Moreover, it can be observed in Figure 3 that IoT devices try to upload the data they collect to the Bridge Service every time they wake up. Since the Bridge Service is in charge of caching the collected information, it can send it directly to the users when they ask for it, instead of waking up the IoT devices, thus reducing response times and decreasing IoT device power consumption.

Figure 3. Sequence diagram of a low-power system built on top of the framework.
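A minimal sketch of this presence-based power management is shown below, assuming the Bridge Service publishes user presence per area on a dedicated MQTT topic; the area name, topic layout and payload values are placeholders, not part of the framework specification.

# Sketch of the presence-based power policy described above: the device
# stays in low-power mode until the Bridge Service announces that an
# AR/MR user entered its area. Area, topics and payloads are placeholders.
import paho.mqtt.client as mqtt

MY_AREA = "area/livingroom/presence"   # placeholder area topic

def on_message(client, userdata, msg):
    if msg.payload == b"user_nearby":
        # Leave low-power mode and start frequent telemetry updates.
        client.user_data_set({"update_period_s": 1})
    elif msg.payload == b"user_left":
        # Return to the low-power configuration.
        client.user_data_set({"update_period_s": 60})

client = mqtt.Client(userdata={"update_period_s": 60})
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe(MY_AREA)
client.loop_forever()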
Implementation

The theoretical architecture detailed in Section 3.3 was implemented as illustrated in Figure 4. All the software necessary to replicate the system is available on GitHub [15]. The next subsections provide details on the different layers and sublayers depicted in Figure 4.

IoT Device Sublayer

The practical implementation of this sublayer takes into account that IoT devices are usually constrained in terms of computational power and that they have to be energy efficient when they rely on batteries. As a consequence, the communications protocols used by this sublayer have to be lightweight in relation to the amount of necessary computational resources and the required power consumption [43]. Moreover, such protocols should enable mechanisms that allow for responding fast to the different events related to the requests from IoT and remote AR/MR devices. Furthermore, the mentioned protocols should be standard in order to foster interoperability in IoT device ecosystems. The previously mentioned features justify the use of MQTT, which has been recently standardized [44], is lightweight, is already supported by many IoT devices, and is really easy to use: IoT devices simply subscribe to MQTT topics and wait for messages addressed to them. Similarly, if an IoT device wants to send certain collected sensor data or notifications, it just has to publish them on the corresponding MQTT topic.

AR/MR Device Sublayer

Among the diverse AR/MR hardware devices currently available [45,46], the most popular are smartphones and tablets, due to their sufficient computational power and low cost. Although such devices may support practical AR/MR applications, they do not offer an experience as immersive as smart glasses and Head-Mounted Displays (HMDs), whose price has been decreasing during the last years. As of writing, Microsoft HoloLens smart glasses [30] provide the best trade-off between affordability (although they are not cheap: around $3500), usability and AR/MR experience. The latter is achieved thanks to the HoloLens acceleration hardware and the provided software platform, which makes use of an operating system adapted to AR/MR development. Such a software platform eases the work of HoloLens developers, but it must be noted that certain low-level developments (e.g., related to raw video processing or communication sockets) are not supported directly by the HoloLens Software Development Kit (SDK), so their use requires additional programming effort. In contrast, other communication protocols like HTTP are supported by the SDK, so their use is straightforward. Similarly, other actions that are very useful for creating attractive interactions and immersive experiences are handled out of the box by the SDK. The most relevant HoloLens SDK modules used by the implemented application are summarized in Figure 5. Such modules interact with the HoloLens hardware so as to provide a good AR/MR QoE. The following are their main tasks:

• Gaze Manager and Gaze Stabilizer modules: they track the user's head orientation and where the user is looking at.

• Spatial Mapping module: it obtains and updates a 3D map of the user's surroundings.

• Gesture Manager module: it recognizes the hand gestures performed by the user.

In Figure 5, the black arrow that departs from the Gesture Manager indicates the path that is followed when a user interacts with the smart glasses: the Button Handler detects the gesture and then calls the Service Module of the AR/MR-IoT framework. If the detected action involves interacting with an IoT device, then the corresponding HTTP request is sent to the AR/MR-IoT Layer (to Node-RED [47], whose inner workings are detailed later).

AR/MR-IoT Layer

This layer runs, on a cloud, an MQTT server called Mosquitto [48], Node-RED and a REST API. The layer routes the messages exchanged between the IoT and the AR/MR devices as follows:
1. When an AR/MR device wants to perform an action on an IoT device (e.g., to collect certain data or to actuate on it), it sends an HTTP request to Node-RED through the REST API.

2. The HTTP request is processed by Node-RED, which then decides to which IoT device it should be forwarded. Such a decision is usually conditioned by a session token that is embedded in the AR/MR device request.

3. The request is forwarded to the IoT device through an MQTT message that is published on Mosquitto under a specific topic.

4. Since the target IoT device will be subscribed to the topic where the MQTT message was published, it will consume it as soon as it receives the new-message notification from Mosquitto.

5. To send data back, the IoT device also makes use of MQTT messages through Mosquitto. Such messages are stored by Node-RED and then forwarded to the AR/MR devices that request updates.

As can be observed from the previous description, the core of the implementation is Node-RED, which enables interconnecting the different elements of the system. Node-RED is a visual-programming tool based on Node.js [49] that allows for easily managing the data flows that connect the multiple components. Therefore, Node-RED acts as a translator of the different protocols embedded into the exchanged messages.
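The following minimal Python sketch mimics steps 1-5 with a plain HTTP endpoint that republishes requests over MQTT. It is not the Node-RED flow used in the actual implementation (Node-RED is configured visually), but an equivalent, heavily simplified stand-in; the endpoint paths, topic layout and port numbers are assumptions.

# Simplified stand-in for the Node-RED bridge (steps 1-5 above):
# an HTTP endpoint that republishes AR/MR requests over MQTT.
# Endpoint paths, topic layout and ports are illustrative assumptions.
from flask import Flask, request, jsonify
import paho.mqtt.client as mqtt

app = Flask(__name__)
mqttc = mqtt.Client()

latest = {}                        # cache of the last telemetry per topic

def on_message(client, userdata, msg):
    # Step 5: store device telemetry so later HTTP reads are served
    # from the cache instead of waking the device up.
    latest[msg.topic] = msg.payload.decode()

mqttc.on_message = on_message
mqttc.connect("localhost", 1883)   # Mosquitto broker
mqttc.subscribe("tele/#")
mqttc.loop_start()

@app.route("/device/<device_id>/power", methods=["PUT"])
def set_power(device_id):
    # Steps 1-3: translate the HTTP request into an MQTT command.
    mqttc.publish(f"cmnd/{device_id}/POWER", request.get_data())
    return jsonify({"forwarded": True})

@app.route("/device/<device_id>/sensor", methods=["GET"])
def get_sensor(device_id):
    # Served from the cache populated in on_message.
    return jsonify({"data": latest.get(f"tele/{device_id}/sensor")})

if __name__ == "__main__":
    app.run(port=1880)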
A Practical Application: AR/MR-Based Energy Monitoring and Control

In order to validate the proposed design and to assess the level of compliance with the framework requirements, a practical application was analyzed and designed to provide a reference implementation. Such an application consists of a smart power outlet for monitoring energy consumption that can be controlled from a pair of Microsoft HoloLens smart glasses. Specifically, the proposed system provides AR/MR users with real-time data collected from a commercial smart power outlet. Such users can also switch the power outlet on and off by interacting with a virtual dashboard through gestures. It must be noted that the proposed practical implementation was devised to provide fast response times so as to deliver a good user experience when using the AR/MR application. In addition, it was designed to be easily replicated and to make it easy to assess the compatibility of IoT devices with the proposed framework. The next subsections detail the main components of the system.

IoT Smart Socket

The IoT smart power socket used is a commercial device (a Sonoff POW module, from Itead Studio [50]) that has a WiFi-enabled microcontroller (ESP8266) and a relay. It also embeds a current sensor (HLW8012) and a shunt resistor to measure voltage. Figure 6 shows an example test setup, where a smart socket controls and monitors a lamp. In addition, in Figure 6 there is a QR code label (usually glued to the top of the smart socket case): it is used by the HoloLens app to quickly obtain the smart socket identifier, so that the HoloLens glasses can address their requests to that specific smart socket through the AR/MR-IoT framework. The smart socket can monitor the current that flows through the socket and log the intensity, so it can determine the instantaneous power and the consumed energy. In addition, the smart socket allows for receiving ON and OFF commands through a standard MQTT communications channel. Such a channel is also used to send back the collected energy consumption values. The smart socket firmware is based on Tasmota [51], which is one of the most advanced open-source firmware projects for smart appliances. The firmware has native support for MQTT, so the integration with the AR/MR-IoT framework is straightforward and does not require any modification other than configuring the connection parameters of the MQTT broker server. However, the firmware was modified to support additional features that allow for programming operating intervals, so that the smart socket can be switched ON and OFF automatically at specific time instants (e.g., when the electricity cost is cheaper). This latter feature requires obtaining and storing the daily energy prices and then implementing an intelligent planning algorithm that is capable of using the energy price data to make the appropriate decisions.

Figure 6. Test setup with a Sonoff POW smart socket, its QR code label and the test appliances.

Node-RED Configuration and REST API

Node-RED is the technology responsible for the bridge service operations. It translates the information between the different protocols used to communicate with both the IoT and the AR/MR device sublayers. It also performs caching operations to improve the latency of the system and adapts data types to make the system interoperable. It was configured as shown in the screenshot in Figure 7. At the top of such a figure are the system initialization, configuration and cleaning nodes that set the sampling periods and configure the associated smart sockets. Below them are the nodes that handle data storage and caching, as well as the REST API endpoints. The connections between the nodes indicate the existing data flows, while the nodes represent inputs, outputs and data processing stages. The proposed setup makes it easy to reconfigure the data flows to fit new use cases with little effort. The REST API is composed of different endpoints that allow applications to perform all the necessary actions. All of them are exposed through the Node-RED HTTP service and are intended for reading and writing data from the applications. A brief description of each of them is given in Table 2; the main endpoints are the following:

• Timer listing (GET): it returns the timer configuration; if it is not configured, it returns the list of 16 timers as if they were inactive.

• /timer (PUT): it allows for modifying a timer configuration, so it receives the timer number as an input parameter.

• /Status (GET, PUT): it returns a JSON object with the status of the smart socket and allows for modifying it through a PUT request. This method works in best-effort mode: the interface is updated before the node response is known. In case of error, the state will be updated again as soon as possible to reflect a consistent state.

For security reasons, all endpoints accept as input parameters the smart socket identifier and a cookie with an authentication token that identifies the user.

MQTT Topics

The MQTT message hierarchy was structured around topics divided into three levels. The first level is the root MQTT topic, which is defined depending on the type of message to be handled:

• /cmnd: topics with this root are requests sent from a client to an IoT device.

• /stat: topics with this root are always responses to cmnd-type messages.

• /tele: topics with this root correspond to messages sent periodically by the devices to update their status.

The second topic level of the message hierarchy corresponds to the socket identifier, which includes the user token and the unique hardware identifier of the socket, with the format token_hwId. The last level of the hierarchy indicates an action or is related to certain information:

• /sensor: it is used for obtaining the consumption and power data of the socket.

Finally, it is worth pointing out that the messages exchanged through the REST API and MQTT are JSON files, which allows for standardizing the data types defined by the system.
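As an illustration of this three-level hierarchy, the following snippet shows how a client-side command and the corresponding subscriptions could look; the token and hardware identifier values, as well as the POWER command name, are placeholders.

# Illustration of the three-level topic hierarchy; the token, hardware
# identifier and POWER command name below are placeholders.
import paho.mqtt.client as mqtt

socket_id = "a1b2c3_sonoff01"       # token_hwId (placeholder values)

client = mqtt.Client()
client.connect("localhost", 1883)

# Listen for the response to the command and for periodic telemetry.
client.subscribe(f"stat/{socket_id}/POWER")
client.subscribe(f"tele/{socket_id}/sensor")

# Request that the smart socket switches on.
client.publish(f"cmnd/{socket_id}/POWER", "ON")
client.loop_forever()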
AR/MR Application

Figure 8 shows the main virtual dashboard of the developed AR/MR application, which can be moved by the user throughout the real-world scenario. The dashboard includes two virtual buttons to switch the socket on and off, a graph that shows the historical power consumption values, and two displays that indicate the instantaneous consumed power and the power factor. Users can interact with the dashboard by clicking on the virtual buttons through hand gestures. Such gestures are first captured by the HoloLens SDK and then processed by the Service Module of the AR/MR-IoT framework, which is responsible for sending the switch on/off commands to the smart power outlet and for collecting the power consumption data from it. All the communications that come from the AR/MR application make use of HTTP, which is natively supported by the HoloLens framework. This makes it possible to implement all the communications using the standard APIs provided by Microsoft without any further modifications. This fact, together with the use of MQTT for IoT device communications, allows the framework to provide wide compatibility, which would not be possible without the intervention of the bridge layer to fuse both protocols.

Use Cases

In order to describe the inner workings of the developed AR/MR-IoT smart socket system, the next subsections detail several relevant use cases with the help of UML sequence diagrams. Figure 9 shows the message flow required to perform a switch-on request. Such a request is initially sent via HTTP to the Node-RED REST API. The request is then translated into an MQTT query by an intermediate service and sent through the MQTT broker to the smart power socket, which is switched on. Then the smart socket confirms the execution of the remote command by sending an acknowledgment message to Mosquitto, which forwards it to Node-RED. The same process is carried out for switching off the smart socket, just changing the ON parameter to OFF.

Daily Hourly Energy Cost

To obtain the information on the energy cost per hour, in the case of Spain, it is necessary to access the API of Red Eléctrica Española (REE) [52]. Unfortunately, such an API returns an unfiltered XML file with a lot of information, whose parsing may be really slow, especially when it is carried out by resource-constrained IoT devices. To avoid this issue, and thus speed up the process, an intermediate service (called Pricing Service) is used, which is in charge of filtering the information and converting it to a JSON file, so that the information is much less verbose. This whole process is illustrated in Figure 10 (a sketch of this filtering step is given at the end of this section).

Instantaneous Energy Values

This use case illustrates the process that is performed when the IoT smart socket notifies its status, which includes the following fields:

• Total energy consumed since the socket was installed.

• The time since the last reset of the smart socket.

The sequence diagram in Figure 11 illustrates the process for collecting the status of a smart socket. It is important to emphasize that Node-RED is in charge of obtaining the data from the sensors periodically and of caching them, with the objective of relieving the IoT socket of the storage load. Thus, the status data are always available in the cache of the bridge layer, even though the IoT device may be in low-power mode, unavailable, or even not connected to the Internet during certain time intervals. This feature is very important in order to fulfill the response time requirement and therefore to provide an appropriate user experience in real-time AR/MR applications.
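To close this section, the following sketch illustrates the kind of filtering the Pricing Service described above performs: fetch the hourly energy prices, keep only the per-hour values, and re-emit them as compact JSON. The URL and the XML tag names are placeholders; the real REE API response format is not reproduced here.

# Sketch of the Pricing Service filtering step. The URL and the XML tag
# names are placeholders; the real REE response format differs.
import json
import urllib.request
import xml.etree.ElementTree as ET

SOURCE_URL = "https://example.org/ree/prices.xml"  # placeholder endpoint

def fetch_filtered_prices():
    with urllib.request.urlopen(SOURCE_URL) as resp:
        root = ET.fromstring(resp.read())
    # Keep only hour/price pairs; everything else in the verbose
    # XML document is discarded.
    prices = {h.get("hour"): float(h.text)
              for h in root.iter("price")}          # placeholder tag name
    return json.dumps(prices)

# IoT devices then parse this small JSON object instead of the raw XML.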
Experiments

The framework requirements indicated in Section 3.2, like 'compatibility', 'ease of development' and 'ease of integration', can be assessed qualitatively in the light of the development experience gained during the implementation of the proposed smart socket-based application. However, this section focuses on latency, which can be evaluated quantitatively by determining how low the response times are (i.e., how good the user experience associated with interactions is). As was concluded in Section 2.1, UX is key in AR/MR applications: if the virtual objects respond slowly to user interactions, the perceived experience will not be good. Due to the need for achieving low response times, and given that latency is a quantitative requirement that can be evaluated empirically, three sets of tests were performed to measure the response latency of the proposed AR/MR-IoT framework. The first set was aimed at estimating how fast the framework manages interaction requests. The second set of tests quantified the speed of the framework when updating the collected IoT data. Finally, the third set of tests analyzed the main interaction latencies for the use case detailed in Section 4. As can be predicted, as the number of users and/or devices in the system increases, the load on the framework server increases accordingly, which impacts QoE. However, it is important to note that the load generated by each individual device remains constant regardless of the number of connected devices. This fact makes it possible to test the system by emulating a large number of real devices, which would otherwise be really expensive (a pair of Microsoft HoloLens currently costs $3500). To validate the use of emulated devices, 30 time-spaced requests were made to the framework, first from a real device (a pair of Microsoft HoloLens glasses) and then from an emulated device, while the framework server was idle. The obtained results are shown in Figure 12. After validating the use of emulated devices, the rest of the test environment was deployed. The cloud server used a virtual machine that ran on a DELL PowerEdge R415. Such a virtual machine had 4 GB of RAM and a 2-core AMD Opteron processor at 3.1 GHz. In addition, for both sets of tests, up to 500 devices were emulated on a desktop computer (i.e., a script executed on a desktop computer performed exactly the same requests as 500 real smart power sockets, as sketched below) with the objective of estimating the performance limits of the framework without depending on the reduced computational capacity of the embedded hardware devices (i.e., in practice, a microcontroller-based smart socket would not be able to perform as many requests per second as a desktop computer). It is important to note that the aim of the tests was to assess empirically the framework performance limit for the above-mentioned hardware. Such an evaluation helped to analyze the behavior of the framework under increasing computational loads and made it possible to estimate the hardware that would be necessary for a real environment. To increase the capacity of the system, it is possible to use more powerful hardware or to scale the system horizontally. To do so, IoT devices can be divided into areas that are managed by cloudlets, which are closer than a cloud, so they manage fewer IoT devices and provide faster response times. However, such an architecture will require specific protocols to coordinate the different cloudlets and to share information among them when required.
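The following sketch shows the kind of load-generation script used in such tests: a pool of worker threads continuously issuing the same REST request a real device would. The endpoint and the request body are illustrative assumptions.

# Sketch of a load-generation script emulating many smart power sockets.
# The endpoint and payload are illustrative assumptions.
import threading
import time
import requests

BASE_URL = "http://framework-server:1880"   # hypothetical bridge address
NUM_CLIENTS = 500

def emulated_client(client_id: int, stop: threading.Event):
    # Each worker continuously issues the same request a real device would;
    # no think time is inserted, so the measured limit is an upper bound.
    while not stop.is_set():
        requests.put(f"{BASE_URL}/device/sock{client_id}/power", data="ON")

stop = threading.Event()
workers = [threading.Thread(target=emulated_client, args=(i, stop))
           for i in range(NUM_CLIENTS)]
for w in workers:
    w.start()

time.sleep(60)      # run the load for one minute
stop.set()
for w in workers:
    w.join()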
AR/MR Interaction Performance

In this first set of tests, several requests (e.g., switch on/off commands) were sent to the smart power outlet through the REST API. Such requests are internally handled by the AR/MR-IoT framework by first sending them to Node-RED and then to the MQTT broker (Mosquitto). Thus, performance was measured from the moment a request was issued by the HoloLens to when it was parsed and executed by the smart socket. The 'Interaction' line in Figure 13 shows how response latency increases with a growing number of interaction requests per second. As can be observed, interaction latency remains stable and under 200 ms for up to 50 requests per second. Then, for more than 50 requests per second, latency increases rapidly, reaching almost 6 s for 90 requests per second. This behavior can be further analyzed by observing the 'Interaction' line in Figure 14, which depicts the number of requests per second that the AR/MR-IoT framework is able to handle as the load of the server increases (for this second experiment, such a load was represented by an increasing number of simulated concurrent clients performing interaction requests). As can be observed in Figure 14, the number of interaction requests per second increases with the number of concurrent clients until it flattens out between 200 and 300 clients. For 300 concurrent clients the framework is able to process 746 interaction requests per second, but, after that point, the server becomes overloaded and starts to respond more and more slowly. This fact can be easily noticed from the server side, since its RAM usage increases rapidly.

Figure 14. Throughput of the framework when increasing the load of the system.

Therefore, the previous experiments indicate that there is a throughput bottleneck in the framework: either Node-RED or Mosquitto slows down interaction request processing. To determine which of the two is responsible, Mosquitto's throughput was measured. The results are depicted in Figure 15 and show that the MQTT broker can handle more than 200,000 requests per second with no problem. As a consequence, it can be easily concluded that the bottleneck is Node-RED. In fact, the underlying problem is that, in contrast to Mosquitto, Node-RED was not specifically designed to handle a large number of concurrent requests. The system throughput does not increase beyond 200 concurrent clients, which suggests that the system is not capable of supporting more concurrent requests. However, for more than 200 concurrent clients, what is relevant is the number of requests per second that the framework is able to handle, since it establishes the limit of requests that different users can make per unit of time. It is relevant to indicate that the tests were stopped at 500 test users because the throughput limit had already been exceeded. Nonetheless, note that the test clients emulate the requests made by a real client, but not the time intervals between each request (for these experiments, the emulated clients sent requests continuously in order to find the system performance limit). Therefore, the actual number of clients supported by the framework will depend on the frequency with which such clients make requests.
IoT Data Update Performance

In a second set of tests, data collected from the sensors were requested using the REST API. It is important to note that in these experiments only the performance of the AR/MR-IoT framework REST API was measured when accessing the IoT data stored in the local database or in memory, so the MQTT broker was not involved. Thorough tests were performed for different numbers of concurrent clients. The 'Data request' line in Figure 14 shows how many IoT data requests per second the framework is able to handle for up to 500 concurrent clients. Such a line shows that the system is able to handle up to roughly 1300 requests per second from 150 concurrent clients, after which it becomes overloaded. In addition, it can be observed that the framework performance was clearly superior in this experiment compared with the processing of interaction requests. This is due to the fact that IoT data update requests are answered directly by Node-RED, so it is not necessary to send a message over MQTT. Finally, the 'Empty request' line in Figure 14 represents the performance of the framework when it responds to IoT data update requests, but when such data are not collected from the local database, but from memory. In this case, the framework reaches 1500 requests per second for 150 concurrent clients, but then becomes overloaded. Therefore, the obtained results show that the accesses to the database during IoT data updates have a limited impact on the framework performance.

Practical Use Case Latency Analysis

There are two different types of response times relevant to the AR/MR-IoT application described in Section 4. The first one is related to the communication issued from the AR/MR application when sending a command to the smart power outlet. For sending such a command, the HoloLens glasses first send an HTTP request to the bridge service, which processes it and translates it into an MQTT command that is sent to the smart power outlet. The response time required for such a set of processes follows Equation (1):

t_interaction = t_HTTP + t_bridge + t_MQTT + t_smart_power_outlet ,    (1)

where:

• t_interaction: the total interaction time required to send a command to the smart power outlet.

• t_HTTP: the time required by the HoloLens glasses to perform the HTTP request that asks the IoT device to execute the sent command.

• t_MQTT: the time required to send the command using the MQTT protocol.

• t_bridge and t_smart_power_outlet: the times required by the bridge service and the smart power outlet, respectively, to process the command requests.

It is important to note that Equation (1) measures the interaction time of a one-way request (i.e., the time that a user will perceive as the one required for executing the command on the smart power outlet (e.g., to switch it on or off)), so the request response time (i.e., the protocol acknowledgment of the execution of the command) is not taken into account for evaluating user experience. After measuring t_interaction for the proposed AR/MR-IoT application under normal operation conditions (i.e., when a HoloLens user performs less than one request per second), it was determined that the average latency when interacting with the smart power outlet was 96.16 ms, with a standard deviation of 17.24 ms. The second type of response time that was measured in the proposed AR/MR-IoT application is related to data requests. For example, such requests can ask for the instantaneous power consumption of a smart power outlet. In this type of request, the bridge service cache comes into play by serving the information stored in its memory and by requesting periodically updated data from the IoT devices.
This considerably reduces latency, as the system does not have to wait for the IoT device to answer the request, since the data are already cached. The response times related to data update requests follow Equation (2):

t_data_update = t_HTTP + t_bridge ,    (2)

where t_data_update stands for the total data update latency, t_HTTP is the time related to the necessary HTTP exchange (it includes both the HTTP request and its response), and t_bridge is the processing time required by the bridge service. The average value obtained for t_data_update with the proposed AR/MR-IoT application under usual loads (i.e., for less than one request per second) was estimated at 95.64 ms, with a standard deviation of 38.85 ms. As a conclusion, it can be stated that the obtained results show that the proposed AR/MR-IoT framework is able to achieve latencies lower than 100 ms when the server is not under heavy load. This is enough to fulfill the requirements defined in Section 3.2 and is also within the reference values set by the ITU in [41], considering that the virtual elements shown on the screen of the AR/MR application are static most of the time (therefore, a latency below 10 ms is not required). It is also relevant to mention the work described in [53]: after a thorough analysis of the quality of the user experience for different video games, the authors determine that such an experience can be considered good for an interaction latency below 150 ms if the movement of the scenes is low or medium (as occurs in most AR/MR applications). Finally, it is worth mentioning that the proposed practical system was evaluated considering a scenario where the server was hosted on a remote cloud. In case faster response times are needed, they can be further decreased by replacing the remote cloud with smaller cloudlets closer to the end devices.

Conclusions

This article presented an AR/MR-IoT framework that eases the integration of AR/MR and IoT devices. The framework makes use of well-known open-source protocols and tools that together allow for communicating AR/MR and IoT devices dynamically and in real time. After reviewing the state of the art, this paper described the design and implementation of the framework, and provided thorough details on its use for a practical application: an AR/MR-controlled energy monitoring application based on a smart power outlet and Microsoft HoloLens. In order to evaluate the performance of the AR/MR-IoT framework, it was tested under different amounts of computational load. The obtained results show that, under normal operation conditions, the framework is able to respond in less than 100 ms to interaction and data update requests. However, for more than 50 requests per second, the operations performed by Node-RED constitute a latency bottleneck, whose mitigation requires faster data access management.
Study of morphology and stellar content of the Galactic HII region IRAS 16148-5011

An investigation of the IRAS 16148-5011 region - a cluster at a distance of 3.6 kpc - is presented here, carried out using multiwavelength data in the near-infrared (NIR) from the 1.4 m Infrared Survey Facility telescope, mid-infrared (MIR) from the archival Spitzer GLIMPSE survey, far-infrared (FIR) from the Herschel archive, and low-frequency radio continuum observations at 1280 and 843 MHz from the Giant Metrewave Radio Telescope (GMRT) and the Molonglo Survey archive, respectively. A combination of NIR and MIR data is used to identify 7 Class I and 133 Class II sources in the region. Spectral Energy Distribution (SED) analysis of selected sources reveals a 9.6 M⊙ high-mass source embedded in nebulosity. However, a Lyman continuum luminosity calculation using the radio emission - which shows a compact HII region - indicates the spectral type of the ionizing source to be earlier than B0-O9.5. Free-free emission SED modelling yields an electron density of 138 cm^-3, and thus a mass of ionized hydrogen of ~16.4 M⊙. Thermal dust emission modelling, using the FIR data from Herschel and performing modified blackbody fits, helped us construct the temperature and column density maps of the region, which show peak values of 30 K and 3.3×10^22 cm^-2, respectively. The column density maps reveal an A_V > 20 mag extinction associated with the nebular emission, and weak filamentary structures connecting dense clumps. The clump associated with this IRAS object is found to have dimensions of ~1.1 pc × 0.8 pc, and a mass of 1023 M⊙.

INTRODUCTION

Most star formation activity is known to take place in clusters (Lada & Lada 2003), and as such, observational studies of young embedded stellar cluster regions are imperative, because they serve as a template to further investigate various associated processes and their signatures. Due to the youth of such regions and the fact that their natal medium has still not been dispersed, these clusters can be used to scrutinize various theories related to star formation, stellar cluster dynamics, as well as stellar and cloud evolution. Of even more importance are the cluster regions which harbour high-mass stars, partly because such regions are few and far between, and more so as high-mass star formation is not very well understood (Zinnecker & Yorke 2007). IRAS 16148-5011 is an infrared nebula (Fig. 1) in the southern sky (α2000 = 16h 18m 35.2s, δ2000 = −50° 18′ 53″), associated with which is an IR cluster found using the Two Micron All-Sky Survey (2MASS) data by Dutra et al. (2003). It is located at the Galactic plane (l ∼ 333.047°, b ∼ +0.037°) and is in the vicinity of the well-known star-forming region RCW 106. Though other star-forming regions are present nearby, IRAS 16148-5011 appeared to be a relatively isolated region in past mappings (Karnik et al. 2001; Mookerjea et al. 2004). Kinematic distance estimates to this region vary from ∼3.3-11.9 kpc (near- and far-distance estimates; Molinari et al. 2008), and we adopt the distance of 3.6 kpc from Lumsden et al. (2013), based on the spectrophotometric distance of a prominent source - "G333.0494+00.0324B" in their nomenclature - associated with the central nebula (see the RMS survey: http://rms.leeds.ac.uk/). The compilation of Lumsden et al. (2013) also reveals other nearby regions at this distance, providing preliminary indications that this could be part of a larger complex.
In the early analyses of Haynes, Caswell, & Simons (1979) (also see Chan, Henning, & Schreyer 1996), radio continuum emission at 6 cm was detected, with a peak in the neighbourhood (∼1′, positional accuracy ∼30″) of this region, and it was predicted to be harbouring massive young stellar objects. The IRAS colour analysis by MacLeod et al. (1998) was also found to be consistent with that for an H II region, and Molinari et al. (2008) - using an IRAS colour value - have suggested that this region is among the younger IRAS detected regions. A high-mass stellar source was detected by Grave & Kumar (2009) with the help of spectral energy distribution (SED) fitting using NIR to millimetre data. Analyses of this region in 1.2 mm dust continuum emission and in molecular lines have revealed the presence of dense gas, with a large column density, as well as massive clumps. The total luminosity estimate (∼4.4×10^4 L⊙, obtained by integrating the IRAS flux densities) is also well in the regime of high-mass stellar objects (Beltrán et al. 2006; Fontani et al. 2005). Therefore, taking these characteristics into account, this region makes a good candidate for an investigation aimed at understanding its morphology and stellar population. However, since this is a possible H II region with an embedded cluster, multiwavelength observations are required to fully discern the region's various constituents and how they relate to each other. In this paper, we have tried to accomplish this using deep NIR observations, archival MIR Spitzer data, archival FIR Herschel data, and low-frequency radio continuum observations. In Section 2, we detail the various observations and the corresponding data analysis procedures, followed by an examination of the stellar population in the region in Section 3. The morphology of the region is discussed in Section 4. Discussion and conclusions are presented in Sections 5 and 6, respectively.

Near-Infrared Observations

NIR photometric observations in the J (1.25 µm), H (1.63 µm), and Ks (2.14 µm) bands (centered on α2000 ∼ 16h 18m 31s, δ2000 ∼ −50° 17′ 32″) were carried out on 2004 July 29 using the 1.4 m Infrared Survey Facility (IRSF) telescope, South Africa. The observations were taken with the Simultaneous InfraRed Imager for Unbiased Survey (SIRIUS) instrument, a three-colour simultaneous camera mounted at the f/10 Cassegrain focus of the telescope. SIRIUS is equipped with three 1k×1k HgCdTe arrays, each of which, with a pixel scale of 0.45″, provides a field of view (FoV) of 7.8′×7.8′. Further details can be obtained from Nagashima et al. (1999) and Nagayama et al. (2003). Five sets of frames, with each set containing observations at ten dithered positions (exposure time of 5 s at each dither position), were obtained (i.e. total exposure time = 5×10×5 s = 250 s in each band). The sky conditions were photometric, with a seeing size of ∼1.35″. In addition to the target field, a sky region (∼10′ to the north of the target region) and the standard star P9172 (Persson et al. 1998) were also observed. Following a standard data reduction procedure - which involved bad pixel masking, dark subtraction, flat-field correction, sky subtraction, combining dithered frames, and astrometric calibration - point spread function (psf) photometry was carried out using the ALLSTAR algorithm of the DAOPHOT package in IRAF. About 11-13 sources were used to construct the psf for each band.
Finally, the instrumental magnitudes were calibrated using the standard star P9172. The astrometric calibration rms obtained was < 0.05″, and the median photometric error < 0.05 mag. On comparing our catalogues with the 2MASS catalogues, sources with Ks magnitudes ≲ 9.5 were found to be saturated, and hence had their J, H, and Ks magnitudes replaced by the corresponding 2MASS magnitudes. The final NIR catalogue is used to identify the young stellar objects (YSOs) in the region. The area of study in this paper is about 5.5′×5.5′, encompassing the nebular cloud region, centered on α2000 ∼ 16h 18m 35s, δ2000 ∼ −50° 19′ 18″. Completeness limits were calculated for all three bands by carrying out artificial star experiments using the ADDSTAR package in IRAF. A fixed number of stars were added in each 0.5 magnitude bin, and the analysis was carried out to see how many stars were recovered. The ratio of the number of recovered stars to the number of added stars gives the completeness fraction as a function of magnitude. The 90% completeness limits were thus calculated as 16.6, 15.8, and 15.5 mag for the J, H, and Ks bands, respectively.

Radio Continuum Observations

Radio continuum observations at 1280 MHz were obtained on 2012 November 09 using the Giant Metrewave Radio Telescope (GMRT) array. The GMRT array consists of 30 antennae arranged in an approximately Y-shaped configuration, with each antenna having a diameter of 45 m. This translates to a primary beam size of 26.2′ at 1280 MHz. A central region of ∼1 km×1 km contains 12 randomly distributed antennae, while the remaining 18 are along the three radial arms (6 along each arm), which extend up to ∼14 km. Details about the GMRT can be found in Swarup et al. (1991). For our observations, the Very Large Array (VLA) phase and flux calibrators '1626-298' and '3C286', respectively, were used. The total observation time (including the calibrators) was about 3.5 hrs, limited by the low declination of the source. Data reduction was carried out using the AIPS software. Initial steps involved flagging the bad data (carried out using a combination of the 'VPLOT-UVFLG' and 'TVFLG' tasks) and calibration (carried out using 'CALIB-GETJY-CLCAL'). After a few iterations of flagging and calibration, the source data were 'SPLIT' from the whole, and used for imaging with the task 'IMAGR'. A few rounds of (phase) self-calibration were also carried out using the task 'CALIB' to remove any ionospheric phase distortion effects. To check that the flux calibration was done correctly, the image of the flux calibrator was constructed, and its flux determined and checked against literature values. The final (target) source images were rescaled to take into account the system temperature corrections for the GMRT. These corrections are required because, at the Galactic plane, the large amount of radiation at metre wavelengths increases the effective temperature of the antennae. This was done as follows. Using the sky temperature map of Haslam et al. (1982) at 408 MHz, and the spectral index of -2.6 given therein, the temperature towards this region was obtained, i.e. T_ν = T_408 × (ν/408 MHz)^(-2.6), where T_408 is the temperature at 408 MHz and T_ν is the temperature at the required frequency ν (1280 MHz here). The images were subsequently rescaled by a factor of (T_ν + T_sys)/T_sys, i.e. the ratio of the temperature towards the target to that towards the flux calibrator. T_sys, the system temperature, was obtained from the GMRT manual.
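As an illustration of this rescaling step, the short sketch below evaluates the correction factor; the numerical values of T_408 and T_sys used here are placeholders, not the values adopted in the paper.

# Sketch of the GMRT system-temperature rescaling described above.
# T408 and TSYS below are placeholder values, not the ones used here.
T408 = 40.0        # K, hypothetical Haslam et al. (1982) sky temperature
TSYS = 73.0        # K, hypothetical GMRT system temperature at 1280 MHz
FREQ_MHZ = 1280.0  # target observing frequency

# Scale the 408 MHz sky temperature to the observing frequency
# using the spectral index of -2.6.
t_freq = T408 * (FREQ_MHZ / 408.0) ** -2.6

# Rescaling factor applied to the final images.
scale = (t_freq + TSYS) / TSYS
print(f"T_nu = {t_freq:.2f} K, rescale factor = {scale:.3f}")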
In addition to the GMRT observations, we also obtained the archival first-epoch Molonglo Galactic Plane Survey (MGPS) data at 843 MHz (Green et al. 1999). The MGPS observations were carried out using the Molonglo Observatory Synthesis Telescope (MOST) with a resolution of 43″×43″ cosec|Declination|. For our analysis, we retrieved the original processed image for this region's Galactic coordinates (resolution ∼55.84″×43″) from the survey website. The radio images are used to obtain the physical parameters and examine the morphology of the ionized gas in the region (see Section 4.2).

Spitzer mid-infrared observations

Archival MIR observations of this region, obtained using the Spitzer Space Telescope under the Galactic Legacy Infrared Midplane Survey Extraordinaire (GLIMPSE) program (Benjamin et al. 2003; Churchwell et al. 2009), were retrieved with the help of the InfraRed Science Archive (the Spring '07 Archive version, which is the more complete catalogue). GLIMPSE observations were taken using the InfraRed Array Camera (IRAC) in the 3.6, 4.5, 5.8, and 8.0 µm bands. The final image cutouts (pixel scale of 0.6″) as well as the final catalogues were downloaded. While the images are further used to analyse features in the region, the photometric catalogue, in conjunction with the NIR catalogue (Section 2.1), was used to identify the YSOs in the region.

Herschel far-infrared observations

This region has been observed at FIR wavelengths, in the range 70-500 µm, using the instruments Photodetector Array Camera and Spectrometer (PACS; Poglitsch et al. 2010) and Spectral and Photometric Imaging Receiver (SPIRE; Griffin et al. 2010) on the 3.5 m Herschel Space Observatory (Pilbratt et al. 2010), as a part of the Proposal ID "KPOT smolinar 1" (Molinari et al. 2010). For our analyses, we obtained the PACS 70 µm and 160 µm Level 2.5 MADmap images (Cantalupo et al. 2010), and the SPIRE 250 µm, 350 µm, and 500 µm Level 2.5 extended source ("extdPxW") map products from the Herschel Science Archive. The 70 µm, 160 µm, 250 µm, 350 µm, and 500 µm images have pixel scales of 3.2″, 6.4″, 6″, 10″, and 14″, respectively, with resolutions varying from ∼5.5″ to 36″. While the PACS images obtained had the surface brightness unit of Jy pixel^-1, the SPIRE images were in units of MJy sr^-1. Detailed information about the data products is provided on the Herschel site. We use the FIR data to examine the physical conditions of the region.

Identification of YSOs

The NIR and MIR photometric catalogues from the IRSF (Section 2.1) and GLIMPSE (Section 2.3.1), respectively, were cross-matched within a 0.6″ matching radius and collated to obtain a combined photometric catalogue. Thereafter, the following set of steps was followed for the identification of the YSOs (similar to Mallick et al. 2013):

(i) First, the YSOs were identified using their MIR magnitudes. The sources with detections in all four IRAC bands with errors ≤ 0.15 mag were used here. Using simple linear regression, the IRAC spectral index (α_IRAC = d log(λF_λ)/d log(λ); Lada 1987) was calculated for each source (in the wavelength range 3.6-8.0 µm), followed by their classification into Class I and Class II categories using the limits from Chavarría et al. (2008) (α_IRAC > 0 for Class I, and −2 ≤ α_IRAC ≤ 0 for Class II; see Fig. 2(a)).

(ii) Not all sources need have detections at 5.8 and 8.0 µm, but they might still have good-quality detections in the NIR bands.
To identify the YSOs from amongst such sources, we use a combination of the H, Ks, 3.6 µm, and 4.5 µm bands, following the procedure of Gutermuth et al. (2009). Again, only those sources whose 3.6 and 4.5 µm magnitude errors are ≤ 0.15 mag are used here. The YSOs were identified from their location in the dereddened - using the colour excess ratios from Flaherty et al. (2007) - "Ks-[3.6]" versus "[3.6]-[4.5]" colour-colour diagram (CCD), as shown in Fig. 2(b).

(iii) Additional YSOs were identified using the J − H/H − K CCD (Fig. 2(c)) according to the following procedure. In Fig. 2(c), the red solid curve marks the dwarf locus from Bessell & Brett (1988) and the blue solid line marks the Classical T Tauri Stars (CTTS) locus from Meyer, Calvet, & Hillenbrand (1997). All the loci curves as well as the sources' colours were converted to the CIT (California Institute of Technology) photometric system (using Carpenter 2001) for this analysis. The slanted dashed lines are the reddening vectors, drawn using the reddening laws of Cohen et al. (1981) for the CIT photometric system. In this CCD, three separate regions have been marked, similar to Ojha et al. (2004a,b). The sources in the 'T' and 'P' regions are taken to be Class II sources (Lada & Adams 1992), as they exhibit IR-excess emission. In the 'P' region, since there could be a slight overlap between Herbig Ae/Be stars and Class II sources (Hillenbrand et al. 1992), we conservatively took only those sources which were above the CTTS locus extended into this region (similar to Mallick et al. 2014). It should be noted that the sources in the 'T' region could also contain a few Class III sources with small IR excess.

Finally, we merged the sources identified in each step. An overlapping source might have different identifications in different steps. Thus, in the final list, the class of a YSO was taken as that in which it was identified first, in the above order of steps. A total of 7 Class I and 133 Class II sources were obtained (for a FoV of ∼5.5′×5.5′ encompassing the molecular cloud, as marked in Fig. 1) in the final YSO catalogue, which is given in Table 1.
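As an illustration of how the IRAC spectral index in step (i) can be computed, the following sketch fits log(λF_λ) against log(λ) with a simple linear regression; the flux values used are placeholders, and the conversion from catalogue magnitudes to F_λ is assumed to have been done beforehand.

# Sketch: IRAC spectral index via linear regression of log(lambda*F_lambda)
# on log(lambda), as in step (i). Fluxes below are placeholder values;
# converting catalogue magnitudes to F_lambda is assumed done beforehand.
import numpy as np

wavelengths_um = np.array([3.6, 4.5, 5.8, 8.0])             # IRAC bands
f_lambda = np.array([2.1e-14, 1.5e-14, 1.1e-14, 7.0e-15])   # placeholders

x = np.log10(wavelengths_um)
y = np.log10(wavelengths_um * f_lambda)

alpha_irac, _ = np.polyfit(x, y, 1)   # slope = spectral index

# Classification limits from Chavarria et al. (2008); sources below the
# Class II limit are not YSO candidates in this scheme.
if alpha_irac > 0:
    cls = "Class I"
elif alpha_irac >= -2:
    cls = "Class II"
else:
    cls = "not classified as a YSO here"
print(f"alpha_IRAC = {alpha_irac:.2f} -> {cls}")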
Spectral Energy Distribution of YSOs

For the SED modelling of selected sources, we used the online SED fitting tool of Robitaille et al. (2007), whose model grid assumes a central source surrounded by an accretion disk and an infalling envelope, with cavities carved out by a bipolar outflow. A total of 200,000 SED models are computed in a 14-dimensional parameter space (covering properties of the central source, the infalling envelope, and the disk), using the radiation transfer code of Whitney et al. (2003a,b). The online fitting tool attempts to fit the available SED models to the data, characterising each fit by a χ² parameter. The distance range and the interstellar visual extinction (A_V) are free parameters whose ranges have to be specified by the user. Accounting for the uncertainties, we adopted a large distance range of 3.4 to 3.8 kpc for our sources. From extinction calculations (see Section 4.1), we find that almost all non-YSO sources had A_V ≲ 20 mag, and thus we specified the range of interstellar visual extinction A_V as 1-20 mag. Since the number of SED models is very large, spanning a wide range of parameter space, the models fitting each source can only be constrained by increasing the number of data points used, and by having data points that sufficiently cover the entire wavelength range of fitting. For this reason, we chose the sources with photometry in at least all four IRAC bands for SED modelling. However, in addition to those selected using this criterion, we also carried out SED analysis for the prominent sources associated with the central nebula (even though two of them lacked 8.0 µm photometry). We note that photometry from Herschel images is not useful here, as none of the YSOs have counterparts at those wavelengths, which in part is due to the poor resolution. For each source, the SED fitting tool - besides giving the best-fit model - also gives a set of well-fit models ranked by their χ² values as a measure of their relative "goodness of fit". Following a method similar to Robitaille et al. (2007), we consider for further calculation only those models which satisfied the criterion

χ² − χ²_min ≤ 3 × (number of data points),    (1)

where χ²_min is the χ² of the best-fit model. As elucidated in Robitaille et al. (2007), though this criterion is based on visual examination of SED plots and has no rigid mathematical backing, a stricter criterion might lead to overinterpretation. For each parameter, the weighted mean value and standard deviation were calculated using the models which satisfied Equation (1). The inverse of the respective χ² was taken as the weight for each model (similar to Grave & Kumar 2009). Table 2 gives our SED modelling results, listing the physical parameters: age of the central source (t*), mass of the central source, disk mass (M_disk), disk accretion rate (Ṁ_disk), envelope mass (M_env), temperature of the central source (T*), total system luminosity (L_total), interstellar visual extinction (A_V), and the χ²_min per data point. The errors for some parameters tend to be large, as we are dealing with a large parameter space while having very few data points to constrain the number of models. If a source was simply fit better as a star with high interstellar extinction, it has not been included in the table. Thus, finally, SED results are given for a total of 3 Class I sources, 24 Class II sources (including 2 central sources), and one additional central source. Even though the statistics for the YSOs (i.e. Class I and Class II sources) are not very significant, we can still use them to get an idea of the physical parameters of the stellar sources forming in this region. As can be seen from Table 2, there appears to be a considerable age dispersion, with the ages ranging from ∼0.05 Myr to 0.5 Myr for most of the sources; 5 YSOs even have ages > 1 Myr. This is suggestive of ongoing star formation. All the YSOs analysed here yield masses > 2 M⊙, and five of them > 6 M⊙. One of the YSOs (#16) appears to be a high-mass star of ∼9.6 M⊙, and is embedded in the nebular emission associated with this region (see Section 4). The SED plot for this source is shown in Fig. 3. Grave & Kumar (2009) had also carried out an SED analysis for what they mention as an embedded point source in this IR nebula, using JHK and the four IRAC bands. Since, among the possible embedded sources in the central part of this IR nebula, only this source (#16) had all 7 magnitudes, their '16148-5011mms2near' most likely refers to this source itself. Grave & Kumar (2009) calculated the age and mass of this source as 4.2±0.3 (log t*) and 11±1 M⊙, respectively. Though the mass estimate appears consistent (within error limits) with our results, their age estimate is much lower. It is probable that this is because they used different distance estimates and 1.2 mm fluxes from the literature. This source is also classified as a YSO by Lumsden et al. (2013, called "G333.0494+00.0324B"). Another source (#28) associated with the nebula appears to be a high-mass stellar object, with an age > 1 Myr, though it is not classified as a YSO in our analysis. It should be noted that the SED results are only representative of the actual values, as an empirically consistent fit might not be the correct fit.
A deliberate sampling bias of the huge parameter space, introduced to reduce computational time, could give rise to pseudo-trends in the results. The results are also contingent upon the validity of the evolutionary tracks assumed from the literature. Most importantly, the models are for individual sources, and could be misleading in cases of stellar multiplicity. These caveats are dealt with in detail in Robitaille (2008).

Luminosity Function

The slope of the Ks-band luminosity function (KLF) can serve as an indicator of the age of a stellar cluster (Zinnecker, McCaughrean, & Wilking 1993; Lada & Lada 1995; Vig et al. 2014). If we assume that the mass function and the mass-luminosity relation for a (coeval) stellar cluster are power laws, i.e. that they are of the form dN(log m*) ∝ m*^−γ d log m* and L_K ∝ m*^B, then it can be shown that the slope of the KLF will be of the form

α = d log N / dm_K = γ / (2.5 B)    (2)

(Lada, Young, & Greene 1993; Megeath 1996). First of all, we try to estimate the KLF slope. We only consider the YSOs here, as opposed to all the observed sources, as they will be much less affected by any field star contamination. Our K-band 100% completeness limit is up to 14 mag, and thus completeness correction was implemented in the (0.5 mag sized) bins after this limit. Thereafter, the (cumulative) KLF of the YSOs was constructed, and is shown in Fig. 4. The fit to the histogram is also shown, and its slope (d log N/dm_K) in the [12, 15.5] mag range is calculated to be α = 0.35±0.04 (the slope will be the same for cumulative and differential KLFs here; see Lada, Young, & Greene 1993). As for the slope of the mass-luminosity relation, if we adopt the value of B = 2 (which can be shown to be the approximate value for O-F main-sequence stars; Lada, Young, & Greene 1993), then the mass function slope comes out to be γ = 1.75±0.20 (a value slightly steeper than the Salpeter slope of 1.35). Alternatively, we can get an estimate of the mass function slope using the SED fitting results. Since most of our sources are of intermediate mass, in the 2-6 M⊙ range (see Table 2), we consider only this range to estimate the slope γ. Fig. 5 shows the mass histogram (for the YSOs). Assuming that the star formation is strictly coeval, the mass function slope is given by γ = −(d log N/d log m*) (Massey 1998), and thus we calculate γ ∼ 1.59±0.70 for our case. The grey curve in Fig. 5 shows the fitted function for 2-6 M⊙. This is consistent with the γ obtained above. The major source of uncertainty here is the sparse statistics, and thus possible incompleteness in the mass bins.
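As a quick consistency check of Equation (2) with the numbers quoted above (a verification, not an independent derivation):

γ = 2.5 B α = 2.5 × 2 × (0.35 ± 0.04) = 1.75 ± 0.20,

which reproduces the mass function slope derived from the KLF.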
Mass Spectrum

In addition to the SED results, we also use the J/J − H colour-magnitude diagram (CMD) to get an estimate of the mass range of the YSOs. We avoid using a CMD involving the K-band magnitude, as this band is the most affected by the NIR excess flux arising from circumstellar material, which in turn can lead to brightening, and thus an erroneous mass estimate, of the sources. Fig. 6 shows the J/J − H CMD for the YSOs with at least J and H band detections. The 1 Myr PMS isochrone, along with the 2 Myr isochrone for reference, from Siess, Dufour, & Forestini (2000), is shown on the image. For the 1 Myr PMS isochrone, reddening vectors are shown for 0.1, 1, 2, and 4 M⊙. As can be seen, all but one of the YSOs for which the SED analysis has been done (marked with blue stars) lie in the mass range ≳ 2 M⊙, matching well with our SED results. In general, the sources lie in the mass range ∼0.1-4 M⊙, or, put differently, the observations probe stellar objects down to the 0.1 M⊙ limit. The (blue star) source at the far-right end of the diagram is the 9.6 M⊙ high-mass source from the SED analysis. The wide variation in the colours of the YSOs is probably an indication of variable extinction as well as of the different evolutionary stages of the sources. The major causes of uncertainty here are: the uncertainty in the distance estimate, which is used to obtain the apparent magnitudes for the isochrones, and unresolved binarity (especially since the source distance of ∼3.6 kpc is much larger than that of most previously studied clusters). Additionally, it should be kept in mind that, in general, a particular PMS model will introduce its own systematic error.

Cluster Analysis and Extinction Mapping

The surface density and extinction contours (Fig. 7) were calculated as follows. The surface density analysis was carried out using the nearest-neighbour (NN) method (Casertano & Hut 1985) to discern the YSO clusterings in the region. We chose 20 NN, similar to Schmeja, Kumar, & Ferreira (2008), Schmeja (2011), and Mallick et al. (2013). The extinction in the region was estimated by constructing an extinction map with the help of the NIR photometric data. We use the NIR CCD (see Fig. 2(c)) for this purpose. In this NIR CCD, the 'F' region mostly contains the main-sequence field sources along with a few probable Class III sources. Since these are sources which have almost lost their circumstellar material, any extinction they exhibit will come from interstellar - rather than circumstellar - material. The sources from this 'F' region which were not identified as a YSO in Section 3.1 were therefore selected. Subsequently, we used a method similar to the Near-IR-Colour-Excess method of Lada et al. (1994), but here the (H − K) colour excess was estimated by dereddening the sources - along the reddening vector - to the low-mass end (turnover onwards) of the dwarf locus (which was approximated as a straight line). A_V was then calculated using the reddening laws of Cohen et al. (1981). After we obtained the visual extinction A_V for each source, an extinction map of the region was made using the NN method, where the extinction at each grid point on the map is the median (because the median rejects outliers) of the A_V of the 20 NN. It is possible that the extinction could be slightly underestimated because, due to the large distance to this region, a significant fraction of the detected sources could be foreground sources, especially towards the centre of the nebula. In Fig. 7, the surface density contours are drawn at 5, 5.75, 6, 7, 7.5, and 8 YSOs pc^-2, while the extinction contours have been drawn at A_V = 4, 4.5, 5, 6, and 6.5 mag levels. As can be seen in the figure, both the surface density and the extinction contours are in the southern portion of the nebular emission, and appear to lie along the sharp boundary of the nebula. The highest surface density contour levels coincide with the highest extinction levels. Though we would have expected to see high interstellar extinction along the main body of the nebula, this is not so, possibly because the NIR observations are not deep enough to detect stars from behind the nebula; thus the extinction of the nebula should be higher than the highest extinction contour level here (it is later estimated to be A_V > 20 mag; see Section 4.4). It should also be noted that the cluster detected here, in the southern part of the nebula, is also coincident with extended radio continuum emission (see Section 4.2).
Radio Morphology

The central core appears to be a compact H II region, near whose peak lie the mm peak and the high-mass source. The 843 MHz contours show extended emission in addition to the central compact region. The extended emission is present only in the southern part, and not in the northern part of the compact H II region, indicating the presence of a dense molecular cloud in the north which is not ionized to the same extent as the southern region. The background 8.0 µm image also shows the diffuse nebular emission in the north. Using the AIPS task JMFIT, the compact cores at both frequencies were fitted with Gaussian models to determine the source sizes and fluxes; the results are given in Table 3. The beam-deconvolved source size at 843 MHz (MGPS) is found to be much larger in area than that at 1280 MHz, which could account for the larger integrated flux density at 843 MHz. Fig. 9 shows the maximum-resolution image of the region that could be obtained at 1280 MHz (∼7″ × 2″), overlaid on the Herschel 70 µm image. These contours show multiple peaks, which is probably indicative of the clumpy nature of the ionized matter in the region. The peaks are mostly coincident with strong 70 µm emission, as expected, since 70 µm also traces the thermal dust emission.

We tried to estimate the physical parameters of the region, using the lower resolution images, as follows. The Lyman continuum luminosity (in photons s−1) required to generate the observed flux density was determined using the following formula (adapted from Kurtz, Churchwell, & Wood 1994):

S* = 4.761 × 10^48 a(ν, Te)^−1 (ν/GHz)^0.1 (Te/K)^−0.45 (Sν/Jy) (D/kpc)²    (3)

where Sν is the integrated flux density in Jy, D is the distance in kiloparsec, Te is the electron temperature, a(ν, Te) is the correction factor, and ν is the frequency in GHz at which the luminosity is to be calculated. The dynamical age t of the H II region can then be solved for using the following equation from Spitzer (1978):

R(t) = Rs (1 + 7 cII t / (4 Rs))^(4/7)    (4)

where R(t) is the radius of the H II region at time t, cII is the speed of sound in the H II region (11 × 10^5 cm s−1; Stahler & Palla 2005), and Rs is the Strömgren radius. The Strömgren radius (Rs, in cm) is given by (Strömgren 1939):

Rs = (3 S* / (4π no² β2))^(1/3)    (5)

where no is the initial ambient density (in cm−3) and β2 is the total recombination coefficient to the first excited state of hydrogen. For our calculation, we assume a typical value of 10000 K for Te (which implies a value of 0.99 for the correction factor a; see Table 6 of Mezger & Henderson 1967) and the corresponding β2 of 2.6 × 10^−13 cm³ s−1 (Stahler & Palla 2005). For no, we use the value of 4.8 × 10^4 cm−3 from Beltrán et al. (2006). R(t) is taken as the geometric mean of the fitted Gaussian source sizes from Table 3. Using these formulae and the 1280 MHz data, we calculate S* and t to be ∼10^47.41 photons s−1 and ∼0.3 Myr, respectively. If we instead use the 843 MHz flux density, then S* and t come out to be ∼10^47.73 photons s−1 and ∼0.5 Myr, respectively. Assuming ZAMS, a comparison of log S* with the tabulated values of Panagia (1973) shows that a spectral type of B0-O9.5 corresponds to this luminosity. Recent calibrations, like Martins, Schaerer, & Hillier (2005), also suggest a (luminosity class V) spectral type of ∼O9.5. So the spectral type of the source ionizing the region (assuming a single source) has to be earlier than B0-O9.5 for the ionization in the nebula to be sustained, since there can be absorption of ionizing photons by dust in the region, which is often significant (Arthur et al. 2004).
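As a numerical cross-check of equations (3)-(5), the short script below chains the three formulae. The 1280 MHz flux density and the H II region radius used here are illustrative stand-ins for the Table 3 values, which are not quoted in the text.

```python
import numpy as np

PC = 3.086e18    # cm
MYR = 3.156e13   # s

def lyman_flux(s_jy, d_kpc, nu_ghz, te=1.0e4, a=0.99):
    """Equation (3): Lyman continuum photon rate in photons/s."""
    return (4.761e48 / a) * nu_ghz**0.1 * te**-0.45 * s_jy * d_kpc**2

def stromgren_radius(s_star, n0, beta2=2.6e-13):
    """Equation (5): Stromgren radius in cm."""
    return (3.0 * s_star / (4.0 * np.pi * n0**2 * beta2)) ** (1.0 / 3.0)

def dynamical_age(r_cm, rs_cm, c_ii=1.1e6):
    """Equation (4) inverted for t, in seconds."""
    return (4.0 * rs_cm) / (7.0 * c_ii) * ((r_cm / rs_cm) ** 1.75 - 1.0)

# Illustrative inputs: the flux density and H II region radius stand in for
# the Table 3 values (1280 MHz data, D = 3.6 kpc, n0 from Beltran et al. 2006).
s_star = lyman_flux(s_jy=0.25, d_kpc=3.6, nu_ghz=1.28)
rs = stromgren_radius(s_star, n0=4.8e4)
t = dynamical_age(r_cm=0.45 * PC, rs_cm=rs)
print(f"log S* ~ {np.log10(s_star):.2f}, t ~ {t / MYR:.2f} Myr")
```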
The dynamical age is thus approximately of the order of a few tenths of a Myr. We also fit to our data the free-free emission model of Mezger & Henderson (1967), according to which (adapted from Mezger, Schraml, & Terzian 1967):

Sν = 3.07 × 10^−2 Te ν² Ω (1 − e^(−τ(ν)))    (6)

with

τ(ν) = 1.643 × 10^5 a(ν, Te) ν^−2.1 (ne² l) Te^−1.35    (7)

where Sν is the integrated flux density (in Jy), ν is the frequency (in MHz), ne is the electron density (in cm−3), l is the extent of the ionized region (in pc), τ is the optical depth, and Ω is the solid angle subtended by the source (in steradians). The quantity ne² l determines the optical depth of the medium (in cm−6 pc) and is called the emission measure. Taking Ω as 1.133 × θmajor × θminor, and using the two data points, we fit the above equation by non-linear regression, keeping the emission measure ne² l as a free parameter. The fit is shown in Fig. 10. The emission measure is thus determined to be ∼(4.00 ± 0.09) × 10^4 cm−6 pc. Since most of the central radio emission is confined within a circle of 120″ diameter, we can take this to be the extent of the H II region (∼2.1 pc), and the electron density ne then turns out to be ∼138 cm−3. Further, using the formula from Mezger, Schraml, & Terzian (1967, their equation A.5), we calculate the total mass of ionized hydrogen, MHII, to be ∼16.4 M⊙. These low values of ne and MHII, as well as the extent, suggest that this region might be slightly more evolved than a compact H II region (Kurtz & Franco 2002).
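The two-point fit of equations (6)-(7) can be replicated as follows. In this sketch the flux densities and fitted source sizes are placeholders chosen to be broadly consistent with the quoted results rather than the actual Table 3 entries, and the ionized mass is computed under a uniform-sphere approximation rather than with the Mezger, Schraml, & Terzian (1967) expression used in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

TE, A_CORR = 1.0e4, 0.99                 # K; correction factor a(nu, Te)
PC, M_H, MSUN = 3.086e18, 1.674e-24, 1.989e33

nu = np.array([843.0, 1280.0])           # MHz
s_obs = np.array([0.41, 0.23])           # Jy; placeholders, not Table 3 values
# beam-deconvolved sizes in arcsec (843 MHz larger), also placeholders:
omega = 1.133 * np.array([70.0 * 55.0, 50.0 * 45.0]) / 206265.0**2   # sr

def model(nu_mhz, em):
    """Equations (6)-(7); em = n_e^2 * l is the emission measure (cm^-6 pc)."""
    tau = 1.643e5 * A_CORR * nu_mhz**-2.1 * em * TE**-1.35
    return 3.07e-2 * TE * nu_mhz**2 * omega * (1.0 - np.exp(-tau))

(em,), _ = curve_fit(model, nu, s_obs, p0=[1.0e4])

l_pc = 2.1                               # adopted extent of the H II region
n_e = np.sqrt(em / l_pc)                 # cm^-3
r_cm = 0.5 * l_pc * PC
# uniform-sphere approximation for the ionized hydrogen mass:
m_hii = (4.0 / 3.0) * np.pi * r_cm**3 * n_e * M_H / MSUN
print(f"EM ~ {em:.2e} cm^-6 pc, n_e ~ {n_e:.0f} cm^-3, M_HII ~ {m_hii:.1f} Msun")
```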
Central region

Fig. 11 shows the central 1′ × 1′ region of IRAS 16148-5011 in the NIR J and Ks bands, the IRAC 3.6 µm, 4.5 µm, and 8.0 µm bands, and the MIPS 24 µm band. The IRAC 3.6 µm and 8.0 µm bands, besides the continuum emission, also encompass the weak PAH emission feature at 3.3 µm and the strong PAH features at 7.7 µm and 8.6 µm. The 4.5 µm band does not contain any PAH features, but does contain shocked molecular gas emission from H2 (v = 0−0) S(9, 10, 11) and CO (v = 1−0). The 24 µm emission is mainly thermal continuum emission from hot dust (Watson et al. 2008; Churchwell et al. 2009). The Ks-band image has been marked with, among others, the high-mass source detected (green cross, #16 in Table 2). The SED-derived mass of this source (9.6 M⊙) corresponds to a later spectral type than that implied by the radio flux, which suggests an ionizing star of type B0-O9.5. In addition, Molinari et al. (2008), using SED modelling (assuming an embedded ZAMS source) and the MSX flux values, estimated the spectral type of the 'IRAS 16148-5011 A' source to be O8. Therefore, for the values to be consistent, it appears that there could be further embedded high-mass source(s) in this region, in addition to those seen in the NIR and MIR. It is possible that the cores seen in the radio contours of Fig. 9 host such high-mass source(s) and contribute to the radio luminosity. The other sources associated with this central part of the nebula have been marked on the 3.6 µm image (#13 and #28 from Table 2, Section 3.2). Source #13 appears to be a highly embedded intermediate-mass young source (∼6.17 M⊙, ∼a few 10^4 yr; see Section 3.2), visible even at 24 µm. Source #28 appears to be a much older and more evolved source (∼8.48 M⊙, ∼a few 10^6 yr), possibly shrouded in the surrounding nebulosity (interstellar extinction of AV ∼ 18.38 mag), leading to a lack of detection in the J band (though it is the second brightest source in the central region in the Ks band, after the 9.6 M⊙ source #16 from Table 2). The radio continuum emission contribution from source #28 will be much lower, hardly making a difference even when combined with the emission from the high-mass source, and thus will not affect the inferences regarding the radio emission above.

For H II regions, PAH emission serves as a useful diagnostic, being characteristic of photo-dissociation regions (PDRs). As a high-mass star ionizes its natal medium, the UV radiation destroys the PAHs in its surroundings. However, at the boundary of the H II region produced, the UV intensity falls off, and a PDR is formed where the PAHs are merely highly excited, leading to strong emission (Povich et al. 2007). The PDR is supposed to be the transition region between the ionized and neutral matter. This usually results in a 'bubble' morphology, in which an H II region is seen surrounded by ring-like PAH emission (Churchwell et al. 2006). A similar morphology can be seen in the 3.6 µm and 8.0 µm images in Fig. 11: a 'semi-ring' is seen in the western half of the image (marked by cyan arrows on the 8.0 µm image). We note that this feature is also faintly seen in the 4.5 µm image, though this could just be continuum emission. That this ring-like feature does not appear symmetric (i.e. there is no clear eastern 'semi-ring', though faint indications are seen) could possibly be due to a denser and/or non-homogeneous molecular cloud on the eastern side, or to projection effects. Volk et al. (1991), on the basis of low-resolution spectra, had classified IRAS 16148-5011 as a PAH source. The bulk of the radio emission (see Fig. 8), as well as the 24 µm emission from hot dust, is confined within this ring-like feature, as expected (Watson et al. 2008). Some of the emission from this ring-like structure could also be due to material swept up by the expanding ionization front of the H II region.

Herschel Results

Thermal emission from cold dust lies in the FIR wavelength range, and its analysis can thus be used to obtain physical parameters such as the dust temperature and column density of a region (Launhardt et al. 2013; Battersby et al. 2011). This was carried out by SED modelling of the thermal dust emission, whose Rayleigh-Jeans regime is covered by the Herschel FIR bands (160-500 µm), in the following sequence of steps. First, the surface brightness unit of all the images was converted to Jy pixel−1. Since the PACS image is already in Jy pixel−1, this step was only required for the SPIRE images (whose units are MJy sr−1), and was carried out using the pixel scales of the respective SPIRE bands. Next, the 160-350 µm images were convolved to the resolution of the 500 µm image (∼36″, the lowest among all the images) using the convolution kernels of Aniano et al. (2011), and regridded to a pixel scale of 14″ (the same as the 500 µm image). Using these final reworked images, with the same resolution and pixel scale, a background flux level, Ibg, was determined from a 'smooth' (i.e. free of abrupt clumpy regions) and relatively 'dark' patch of the sky. The distribution of individual pixel values in the dark patch, for each of the bands, was fitted with a Gaussian iteratively, rejecting the pixel values outside ±2σ in each iteration, until the fit converged. The same patch of sky was used for each band. The background flux level Ibg was thus determined to be 0.29, 2.67, 1.27, and 0.45 Jy pixel−1 for the 160 µm, 250 µm, 350 µm, and 500 µm images, respectively.
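The unit conversion and the iterative background estimate lend themselves to a compact implementation. In the sketch below, the mean and standard deviation of the clipped sample stand in for the full Gaussian fit described above.

```python
import numpy as np

def mjysr_to_jypix(image_mjysr, pix_arcsec):
    """Convert a SPIRE map from MJy/sr to Jy/pixel for square pixels."""
    pix_sr = (pix_arcsec / 206265.0) ** 2
    return image_mjysr * 1.0e6 * pix_sr

def background_level(dark_patch, nsigma=2.0, max_iter=50):
    """Iterative +/- nsigma clipping of the pixel-value distribution of a
    'smooth', 'dark' sky patch; the mean of the surviving pixels stands in
    for the centre of the fitted Gaussian, i.e. the background I_bg."""
    data = np.asarray(dark_patch, dtype=float).ravel()
    for _ in range(max_iter):
        mu, sigma = data.mean(), data.std()
        keep = np.abs(data - mu) < nsigma * sigma
        if keep.all():       # converged: no further pixels rejected
            break
        data = data[keep]
    return data.mean()
```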
We note that the 70 µm image, though available, was not used here, as the optically thin assumption might not hold at this wavelength (Lombardi et al. 2014). Modified blackbody fitting was subsequently carried out on a pixel-by-pixel basis using the following formulation (Battersby et al. 2011; Sadavoy et al. 2012; Nielbock et al. 2012; Launhardt et al. 2013):

Sν(ν) − Ibg(ν) = Bν(ν, Td) Ω (1 − e^(−τ(ν)))    (8)

with

τ(ν) = µH2 mH κν N(H2)    (9)

where ν is the frequency, Sν(ν) is the observed flux density, Ibg(ν) is the background flux in that particular band (estimated above), Bν(ν, Td) is the Planck function, Td is the dust temperature, Ω is the solid angle (in steradians) from which the flux is obtained (here simply the solid angle subtended by a 14″ × 14″ pixel), τ(ν) is the optical depth, µH2 is the mean molecular weight (adopted as 2.8 here), mH is the mass of hydrogen, κν is the dust opacity, and N(H2) is the column density. For the opacity, we adopt the functional form κν = 0.1 (ν/1000 GHz)^β cm² g−1, with β = 2 (see André et al. 2010; Beckwith et al. 1990; Hildebrand 1983). For each pixel, Equation (8) was fitted to the 4 data points, keeping Td and N(H2) as free parameters. Pixels for which the fit did not converge, or for which the error was larger than 10%, had their values replaced by the median of the 8 immediately neighbouring pixels. The final temperature and column density maps of the wider region surrounding IRAS 16148-5011 (shown wide so as to clearly discern the morphological features) are presented in Fig. 12. From the temperature (Fig. 12(a)) and column density (Fig. 12(b)) maps, we obtain peak values of ∼30 K and 3.3 × 10^22 cm−2, respectively, for IRAS 16148-5011. Fontani et al. (2005) give a temperature of 38 K (from a greybody fit to 60 µm, 100 µm, and 1.2 mm data) and a (beam-averaged) column density of 3 × 10^23 cm−2 (from C17O molecular line observations) for IRAS 16148-5011. Our peak temperature is about 20% lower, and our column density estimate is an order of magnitude smaller. The difference in temperature seems to be due to their fitting limitations owing to sparse data points, as well as the fact that the 60 µm emission might not be optically thin; it should be noted, however, that the dust temperature distribution of Fontani et al. (2005) (for their analysed catalogue of IRAS sources) peaks at ∼30 K. The column density disparity can probably be explained by the dependence of Fontani et al. (2005) on the molecular abundance of the rare C17O, whose value can vary widely (Redman et al. 2002; Walsh et al. 2010). The column density map (Fig. 12(b)) displays three peaks: the central IRAS 16148-5011 object, a peak to its north-east, and a peak to its south. The north-east and south peaks appear to be associated with cold clumps, as is evident from the temperature map, which shows Td < 20 K at their positions. The column density map also shows that all the peaks appear to be connected by weak filamentary features, similar to previous results for myriad regions (see André 2013, and references therein). For the central IRAS 16148-5011 object, we estimated the associated clump dimensions, using the 'clumpfind' software (Williams, de Geus, & Blitz 1994), to be ∼62″ × 46″ (i.e. 1.1 pc × 0.8 pc at 3.6 kpc). Owing to the low resolution of the column density image generated here, it does not appear possible to resolve further sub-clumps. The mass of the clump can be estimated as:

Mclump = µH2 mH Apixel Σ N(H2)    (10)

i.e. by calculating the mass in each pixel (of physical area Apixel) and summing over all the pixels that constitute the clump.
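A minimal per-pixel implementation of equations (8)-(10) is sketched below. The background-subtracted fluxes are illustrative placeholders, and fitting log10 N(H2) rather than N(H2) itself is an implementation choice made here to keep the two free parameters on comparable scales.

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16      # cgs constants
M_H, MU_H2, PC = 1.674e-24, 2.8, 3.086e18

nu = C / (np.array([160.0, 250.0, 350.0, 500.0]) * 1.0e-4)   # Hz
omega_pix = (14.0 / 206265.0) ** 2             # sr, one 14" x 14" pixel

def kappa(nu_hz, beta=2.0):
    return 0.1 * (nu_hz / 1.0e12) ** beta      # cm^2 g^-1, 0.1 at 1000 GHz

def greybody(nu_hz, t_d, log_nh2):
    """Equations (8)-(9): background-subtracted flux per pixel, in Jy."""
    b_nu = 2.0 * H * nu_hz**3 / C**2 / np.expm1(H * nu_hz / (KB * t_d))
    tau = MU_H2 * M_H * kappa(nu_hz) * 10.0**log_nh2
    return 1.0e23 * b_nu * omega_pix * (1.0 - np.exp(-tau))

# One pixel's background-subtracted fluxes (illustrative; consistent with
# T_d ~ 20 K and N(H2) ~ 2e21 cm^-2, not taken from the actual maps):
flux = np.array([1.66, 0.94, 0.43, 0.15])      # Jy/pixel
(t_d, log_nh2), _ = curve_fit(greybody, nu, flux, p0=[20.0, 21.0])

def clump_mass_msun(sum_nh2, dist_pc=3600.0):
    """Equation (10): M = mu_H2 * m_H * A_pixel * sum(N_H2), in Msun."""
    a_pix = ((14.0 / 206265.0) * dist_pc * PC) ** 2   # cm^2 per pixel
    return MU_H2 * M_H * a_pix * sum_nh2 / 1.989e33
```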
Using the Σ N(H2) returned by the 'clumpfind' software, the total clump mass is calculated to be ∼1023 M⊙. Another noticeable feature of this map is that the immediate northern vicinity of the central region exhibits a higher column density than the immediate southern portion. This suggests a dense nebula in the northern part, affirming the inference also drawn from the extinction and cluster analysis in Section 4.1 (also see Fig. 7). This is also consonant with the fact that the dense nebular emission is in the northern part, with no extended radio emission seen there (unlike in the southern part). To estimate the visual extinction in this northern part, we use the relation N(H2)/AV = 0.94 × 10^21 molecules cm−2 mag−1 (adapted from Bohlin, Savage, & Drake 1978, assuming a total-to-selective extinction ratio RV = 3.1 and that the gas is in molecular form). The typical N(H2) seen here is 2 × 10^22 cm−2, which implies AV > 20 mag. This high value of AV is probably the reason why very few YSOs are associated with this northern nebular emission (see Fig. 7). If the value of RV were larger, as has been conjectured for dense environments, the inferred AV would increase further.

DISCUSSION

The IRAS 16148-5011 region appears to host an infrared cluster containing high-mass star(s) embedded in the nebula, and could serve as a future template for studying the high-mass star formation process. Age estimates for any cluster are usually plagued by uncertainties (see Lada & Lada 2003, for a discussion), and various proxies are often used to ascertain the evolutionary stage of a cluster. One such proxy is the ratio of Class II to Class I sources (as Class II sources are older) (Schmeja, Klessen, & Froebrich 2005; Beerer et al. 2010; Gutermuth et al. 2009). We have detected 133 Class II and 7 Class I sources in this region. However, it seems that the large ratio (19) could be because the Class I sources are deeply embedded and their detection is affected by the large nebular extinction, though varying ratio values have been estimated for other clusters (Gutermuth et al. 2009). The two brightest (NIR) sources in the central region appear to be high-mass (see Section 4.3), with indications of further embedded high-mass sources. In the literature, Beltrán et al. (2006) report two clumps within 90″ of the IRAS 16148-5011 IRAS catalogue position - '16148-5011 Clump 2' (206 M⊙) and '16148-5011 Clump 3' (42 M⊙) - detected using 1.2 mm emission and a dust temperature of 30 K. While 'Clump 2' is almost coincident with the '16148-5011 MM 1' peak of Molinari et al. (2008) (marked with a diamond symbol in Fig. 11), the 'Clump 3' coordinates are the same as the IRAS 16148-5011 catalogue coordinates (green plus in Fig. 11). The presence of these massive clumps could also point towards ongoing high-mass star formation in the region. Though this part of the sky also harbours other IRAS sources (Karnik et al. 2001), IRAS 16148-5011 does not appear to be part of any larger star-forming complex as such, but it does seem to be connected to other clumps via filaments (similar to the hub-filament morphology of Myers 2009). The high-mass stellar cluster formation in the associated molecular cloud appears to be spontaneous. However, along the rather sharp southern boundary of the molecular cloud, the results of the cluster analysis show a stellar sub-cluster forming (Fig. 7), which could partly be due to triggering by the expanding ionization front of the H II region.
CONCLUSIONS

The main conclusions of this paper, resulting from a multiwavelength study involving NIR, MIR, FIR, and radio continuum data, are as follows:

(i) 7 Class I and 133 Class II YSOs are identified using a combination of NIR and MIR data. A 9.6 M⊙ high-mass source is found to be associated with the central nebula.

(ii) Low-frequency radio emission reveals a compact H II region, with extended emission in the southern part. The Lyman continuum photon luminosity calculation gives B0-O9.5 as the lower limit for the spectral type of the ionizing source (assuming a single source). The dynamical age of the H II region is in the 0.3-0.5 Myr range. SED modelling of the free-free emission yields an electron density of 138 cm−3. The mass of the ionized hydrogen is calculated to be ∼16.4 M⊙. The high-resolution (7″ × 2″) contour map shows clumpy ionized gas in the region.

(iii) The central nebular region shows a ring-like PAH emission feature near the borders of the compact H II region, tracing the PDR. There appear to be three NIR- and MIR-visible central sources, of masses ∼6.2 M⊙, 9.6 M⊙, and 8.5 M⊙. Based on the incongruity between the total radio flux expected from these sources and the flux obtained from the radio observations, and on literature estimates of an early-type ionizing star from MIR SED fitting, it is possible that further embedded high-mass source(s) are present in this region.

(iv) Dust temperature and column density maps are obtained from SED modelling of the thermal dust emission. The peak temperature and column density values are 30 K and 3.3 × 10^22 cm−2, respectively, for IRAS 16148-5011. The column density map reveals that the immediate northern vicinity of IRAS 16148-5011, which contains the nebular emission seen prominently at MIR wavelengths, has a large extinction of AV > 20 mag. This map also shows weak filamentary structures joining IRAS 16148-5011 to nearby cold clumps. The size and mass of the clump associated with IRAS 16148-5011 are estimated to be ∼1.1 pc × 0.8 pc (at 3.6 kpc) and ∼1023 M⊙, respectively.

Future observations of individual objects in spectral lines, deeper IR data to obtain a full stellar census down to below the brown-dwarf limit, and further molecular line observations probing high column densities will help put the star formation scenario on a firm footing and aid the study of high-mass star formation.

Table 1 is available in its entirety in machine-readable form in the online journal; a portion is shown there for guidance regarding its form and content. Note c: identified as "G333.0494+00.0324B" (and also classified as a YSO) in Lumsden et al. (2013).

NIR colour-colour diagram (Fig. 2(c)) caption fragment: the loci are from Bessell & Brett (1988) and the CTTS locus of Meyer, Calvet, & Hillenbrand (1997), respectively; the grey dot-dashed line is the extension of the CTTS locus; three parallel slanted dashed lines mark the reddening vectors, drawn using the extinction laws of Cohen et al. (1981); three separate regions, 'F', 'T', and 'P', are labelled on the plot; the source marked with a cross in the 'P' region is a high-mass YSO (see text).

SED-fit figure caption fragment (YSOs of Table 2): the black dots mark the data points; the solid black curve is the best-fitting model, while the grey curves denote the subsequent good fits with χ² − χ²_min (per data point) < 3; the dashed curve is the photosphere (in the presence of interstellar extinction, but in the absence of circumstellar dust) of the central source for the best-fitting model.

Figure 4 caption fragment: the grey dashed histogram shows the cumulative KLF of the YSOs; the black straight line is the fit in the [12, 15.5] mag range, whose slope is given by α.
Figure 5 caption fragment (mass histogram of the YSOs from Table 2): the grey curve shows the power-law fit in the intermediate mass range.

Figure 7. Spitzer 8.0 µm image of the region with overlaid surface density contours (at 5, 5.75, 6, 7, 7.5, and 8 YSOs pc−2) in cyan, and visual extinction contours (at AV = 4, 4.5, 5, 6, and 6.5 mag) in blue. The green plus symbol marks the IRAS catalogue position of IRAS 16148-5011 and the green cross the high-mass source (#16 from Table 2), while the diamond and box symbols mark the millimetre and MSX peaks, respectively, from Molinari et al. (2008).

Figure 9 caption fragment: contours at 4, 5, 7, 11, 13, 15, 17, 19, 20, and 21 σ (where σ ∼ 0.4 mJy), from the maximum-resolution (7″ × 2″) radio continuum image that could be constructed at 1280 MHz; the symbols are the same as in Fig. 7.
2014-12-04T12:57:48.000Z
2014-12-04T00:00:00.000
{ "year": 2015, "sha1": "7207f0db285e678868f587a92214b098a5ca970e", "oa_license": null, "oa_url": "https://academic.oup.com/mnras/article-pdf/447/3/2307/17335884/stu2584.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "7207f0db285e678868f587a92214b098a5ca970e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
101255838
pes2o/s2orc
v3-fos-license
Exploration of Cd(II)/pseudohalide/di-2-pyridyl ketone chemistry – Rational synthesis, structural analysis and photoluminescence

Tables:
Table S1. Crystal data and structure refinement for the 1, 2 and 4-9 complexes.
Table S2. Crystallographic data and refinement details of M helical structure 1.
Table S3. Short intra- and intermolecular contacts detected in the structures 2-9.
Table S4. The selected bond lengths [Å] and angles [°] for 1.
Table S5. The selected bond lengths [Å] and angles [°] for 2.
Table S6. The experimental and calculated bond lengths [Å] and angles [°] for 4.
Table S7. The selected bond lengths [Å] and angles [°] for 5.
Table S8. The selected bond lengths [Å] and angles [°] for 6.
Table S9. The selected bond lengths [Å] and angles [°] for 7.
Table S10. The selected bond lengths [Å] and angles [°] for 8.
Table S11. The selected bond lengths [Å] and angles [°] for 9a.
Table S12. The selected bond lengths [Å] and angles [°] for 9b.

Figures:
Figure S1. The XRPD spectrum (Cu−Kα radiation) of 1 (black), together with the calculated pattern from the single crystal data (red).
Figure S2. The XRPD spectrum (Cu−Kα radiation) of 2 (black), together with the calculated pattern from the single crystal data (red).
Figure S3. The XRPD spectrum (Co−Kα radiation) of 4 (black), together with the calculated pattern from the single crystal data (red).
Figure S4. The XRPD spectrum (Cu−Kα radiation) of 5 (black), together with the calculated pattern from the single crystal data (red).
Figure S5. The XRPD spectrum (Co−Kα radiation) of 6 (black), together with the calculated pattern from the single crystal data (red).
Figure S6. The XRPD spectrum (Cu−Kα radiation) of 7 (black), together with the calculated pattern from the single crystal data (red).
Figure S7. The XRPD spectrum (Co−Kα radiation) of 8 (black), together with the calculated pattern from the single crystal data (red).
Figure S8. a) The XRPD spectra (Cu−Kα radiation) of 9(a+b) (black), together with the calculated patterns from the single crystal data (red for 9a and blue for 9b); b) the powder XRD pattern of compound 9(a+b) (black), together with the calculated pattern from the single crystal data of 9a (red); c) the powder XRD pattern of compound 9(a+b) (black), together with the calculated pattern from the single crystal data of 9b (red).
Figure S9. IR spectrum of 1.
Figure S10. IR spectrum of 2.
Figure S11. IR spectrum of 4.
Figure S12. IR spectrum of 5.
Figure S13. IR spectrum of 6.
Figure S14. IR spectrum of 7.
Figure S15. IR spectrum of 8.
Figure S16. IR spectrum of 9(a+b).
Figure S17. DSC curve of 1.
Figure S18. DSC curve of 2.
Figure S19. DSC curve of 4.
Figure S20. DSC curve of 5.
Figure S21. DSC curve of 6.
Figure S22. DSC curve of 7.
Figure S23. DSC curve of 8.
Figure S24. DSC curve of 9(a+b).
Figure S25. Perspective views of the asymmetric unit of 4 (a) and 6 (b) showing the atom numbering. Displacement ellipsoids are drawn at 50% probability.
Figure S26. Perspective views of the asymmetric unit of 5 (a) and 7 (b) showing the atom numbering. Displacement ellipsoids are drawn at 50% probability.
Figure S27. Perspective views of the asymmetric unit of 8 (a), 9a (b) and 9b (c) showing the atom numbering. Displacement ellipsoids are drawn at 50% probability.
Figure S28. The solid state absorption electronic spectra of (py)2CO and the Cd(II) coordination compounds.
Figure S29. The UV-Vis spectra of the free ligand and Cd(II) compounds in acetonitrile solution (10−4 M).
2019-03-10T05:39:32.933Z
2016-04-11T00:00:00.000
{ "year": 2016, "sha1": "24d2fd0600d280e8bcaa5958104eb11b88fbf35c", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2016/ce/c6ce00112b", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "24d2fd0600d280e8bcaa5958104eb11b88fbf35c", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
231629250
pes2o/s2orc
v3-fos-license
Efficacy of an Interdisciplinary Intensive Outpatient Program in Treating Combat-Related Traumatic Brain Injury and Psychological Health Conditions Background: Since 2000, over 413,000 US service members (SM) experienced at least one traumatic brain injury (TBI), and 40% of those with in-theater TBIs later screened positive for comorbid psychological health (PH) conditions, including post-traumatic stress disorder (PTSD), depression, and anxiety. Many SMs with these persistent symptoms fail to achieve a recovery that results in a desirable quality of life or return to full duty. Limited information exists though to guide treatment for SMs with a history of mild TBI (mTBI) and comorbid PH conditions. This report presents the methods and outcomes of an interdisciplinary intensive outpatient program (IOP) in the treatment of SMs with combat-related mTBI and PH comorbidities. The IOP combines conventional rehabilitation therapies and integrative medicine techniques with the goal of reducing morbidity in multiple neurological and behavioral health domains and enhancing military readiness. Methods: SMs (n = 1,456) with residual symptoms from mTBI and comorbid PH conditions were treated in a 4-week IOP at the National Intrepid Center of Excellence (NICoE) at Walter Reed National Military Medical Center (WRNMMC). The IOP uses an interdisciplinary, holistic, and patient-centric rehabilitative care model. Interdisciplinary teams provide a diagnostic workup of neurological, psychiatric, and existential injuries, and from these assessments, individualized care plans are developed. Treatment response was assessed using the Neurobehavioral Symptom Inventory (NSI), PTSD Checklist-Military Version (PCL-M), Satisfaction With Life Scale (SWLS), Patient Health Questionnaire-9 (PHQ-9), Generalized Anxiety Disorder-7 (GAD-7), Epworth Sleepiness Scale (ESS), and Headache Impact Test-6 (HIT-6), administered at admission, discharge, and at 1, 3, and 6 months post-discharge. Findings: Following treatment in the IOP, the symptomatic patients had statistically significant and clinically meaningful improvements across all outcome measures. The largest effect size was seen with the GAD-7 (r = 0.59), followed by the PHQ-8 (r = 0.56), NSI (r = 0.55), PCL-M (r = 0.52), ESS (r = 0.50), SWLS (r = 0.49), and HIT-6 (r = 0.42). In cross-sectional follow-ups, the significant improvements were sustained at 1, 3, and 6 months post-discharge. Interpretation: This report demonstrates that an interdisciplinary IOP achieves significant and sustainable symptom recovery in SMs with combat-related mTBI and comorbid PH conditions and supports the further study of this model of care in complex medical conditions.
INTRODUCTION

During Operation Enduring Freedom (OEF), Operation Iraqi Freedom (OIF), and Operation New Dawn, over 2.7 million US service members (SMs) were deployed worldwide. Most of these SMs were engaged in combat operations and intensive combat training, with the result that ∼17% suffered a traumatic brain injury (TBI) (1). From 2000 through the first quarter of 2019, more than 413,000 SMs worldwide were diagnosed with TBI, and of these, more than 80% were categorized as mild TBI (mTBI) (2). Although many individuals who suffer an mTBI recover uneventfully, a substantial minority report persistent post-concussive symptoms (PCS). The rates of persistent PCS are much higher among those with complicating psychological health (PH) conditions such as post-traumatic stress disorder (PTSD) and in military populations with combat-related injuries (3)(4)(5). A history of combat-related TBI represents a significant risk for developing behavioral health (BH) conditions, with more than 40% of these SMs showing evidence of PTSD, major depressive disorder, or anxiety disorders as compared to ∼9% of SMs without a combat-related TBI (6,7). Exposure to TBI and operational stressors and sustaining life-threatening polytraumatic injuries place SMs at greater risk for long-term functional sequelae that affect return to full duty and overall quality of life (8). The conventional specialty referral-based approach results, in many of these chronic, clinically complex active duty patients, in fragmented care and reduced efficiency in achieving the desired recovery. Strategies for a more comprehensive and holistic approach are needed to improve return to duty and achieve favorable outcomes in multiple neurological and BH domains in military personnel with the "invisible wounds of war." Studies of TBI in the military have demonstrated significant and sometimes persistent symptoms even after mTBI (4,(9)(10)(11)(12). In general, persistent PCS are thought to fall into three broad categories: somatic (including vestibular symptoms and headache), cognitive (including sleep disturbance, attention, and memory complaints), and emotional (including post-traumatic stress, emotional regulation, and depression) (13)(14)(15). Interventions designed to address just one domain have had varying levels of success (16).
The prevalence of sleep disorders in patients with a history of mTBI is reported at >75% (15,17), and post-concussive headache is reported in nearly half of service members with combat mTBI (18). These are contributing factors in persistent PCS and can confound the recovery of other symptoms if not addressed early. In active duty military and veteran populations, a number of studies have examined the value of an integrated approach, combining efforts targeting multiple symptoms through cognitive rehabilitation, psychotherapy, psychoeducation, and integrated behavioral health to treat those with a history of TBI and PH issues (19)(20)(21)(22). In the Department of Veterans Affairs (VA) care system, the Polytrauma Transitional Rehabilitation Programs (PTRP) have shown benefit in improving functional outcomes in individuals with TBI of mixed severity and comorbid PH issues. The PTRP treatment paradigm relies heavily on remediation of deficit areas and development of compensatory skills, such as memory strategy training, metacognitive strategy training for executive dysfunction, and practice of skills related to social communication (19). Other VA intensive residential programs for mTBI and PH conditions have focused heavily on treating the BH portion of the symptom profile (23), with several studies showing improvements in PCS, mood, or post-traumatic stress symptoms with such an approach. Chard et al. (20) reported on a treatment program that combined cognitive rehabilitation and cognitive processing therapy with educational groups focused on factors such as nutrition, reduction of self-defeating behaviors, anger management, spirituality, and stress tolerance for those with TBI of mixed severity. While symptoms of post-traumatic stress declined in the sample overall, those with mild TBI showed less improvement than those with more severe TBI. In the same care setting, individuals with TBI and PTSD showed reductions in PCS concurrent with reduced post-traumatic stress symptoms (24). The Home Base program in Boston, Massachusetts offers a comprehensive program with tracks for those with significant post-traumatic stress or a history of TBI (21). In the PTSD track, the participants engage in daily psychotherapy groups and multiple skills groups based on dialectical behavioral therapy, to improve their interpersonal skills and emotion regulation during the 2-week program. They also attend six sessions of Resilient Warrior, a program adapted from the Relaxation Response Resiliency Program that focuses on psychoeducation, as well as several complementary integrative health sessions, including art therapy, yoga, Tai Chi, fitness, and nutrition. The participants reported significant improvements in multiple functional domains (22). Overall, these preliminary studies were promising and recommended large studies utilizing symptom-specific outcome measures in multiple domains for complex comorbid SM populations, with longitudinal follow-up to test the durability of the treatment effects.
Endeavoring to address these gaps and enhance the care offered in the Department of Defense, the National Intrepid Center of Excellence (NICoE), the TBI Directorate at Walter Reed National Military Medical Center in Bethesda, Maryland, was designed and built in 2010 as a proof of concept to assess the efficacy and sustainability of an interdisciplinary intensive outpatient program (IOP) in the treatment of SMs suffering from persistent symptoms of combat- and mission-related mTBI and comorbid PH conditions. The program's aim is to place those SMs who have plateaued in their recovery and are deemed unlikely to experience additional symptom improvement on a trajectory to return to full duty. The care model is a 4-week interdisciplinary, holistic IOP that uses traditional rehabilitation and neurological and BH treatments combined with integrative medicine interventions and skills-based training. The IOP uses a co-localized 17-discipline team to expedite the diagnostic evaluation, leveraging each specialty team member to build on the others' expertise toward common goals and to develop a collaborative care plan. The patient is at the center of the care team, enhancing patient-provider rapport, enabling a more efficient identification of goals for recovery, and providing immediate feedback on the response to treatment. The rehabilitative culture of the care team emphasizes patients' learning of self-efficacy and self-advocacy techniques to enhance sustainable recovery beyond program discharge. We hypothesized that the NICoE 4-week IOP would improve symptoms in SMs with mTBI and PH conditions from pre- to post-program and sustain these improvements at 1, 3, and 6 months post-discharge. To evaluate the clinical effectiveness of the program, the primary analysis consists of seven domain-specific outcome scales at admission and discharge. Durability was assessed using the same self-report scales in cross-sectional follow-up at 1, 3, and 6 months after the SMs returned to their duty station. This study takes the next step in assessing interdisciplinary integrative medicine programs by measuring multi-domain outcomes in the real-world application of an IOP, and collects longitudinal data to characterize the durability of health outcome improvements in a large population of active-duty SMs suffering from chronic comorbid combat-related mTBI and PH conditions.

Participants

Active-duty SMs from all branches of the US armed services and National Guard were referred by their duty station primary care provider (PCP) or coordinating specialist to the NICoE at WRNMMC for enrollment in the 4-week IOP from August 2011 through February 2019. All enrolled patients had medically documented persistent or worsening mission- and combat-related mTBI and PH symptoms following unsuccessful treatment by multiple healthcare disciplines. All referred service members were at a minimum of 6 months post-injury, with an average of 5.09 years post-brain injury, prior to engaging in the program. TBI diagnosis was based on at least one qualifying event as specified by the Department of Defense (DoD) and VA guidelines. The NICoE Continuity Management Team, a team of social workers and nurse case managers, coordinated the assessment of eligibility with team physicians and compiled the relevant clinical care administered to the SMs prior to their admission. This information was then reviewed by members of the interdisciplinary care team in advance of the initial intake interview with the SMs.
The SMs remained on active duty during the IOP and attended the program voluntarily. They are supported by their commands, who detail them to Walter Reed as their duty station. All SMs must have a referring physician from their duty station, since the IOP generates a robust discharge summary and recommendations that are conveyed by the team to the referring provider during a warm handoff so as to maintain smooth continuity of care. TBI number and type were characterized as blast, non-blast, and mixed exposure to blast and non-blast events (Table 2). Data on TBI were obtained from a detailed intake questionnaire cross-referenced with the neurology specialist's intake history and physical examination. Up to six SMs were admitted to the NICoE IOP each Monday and navigated the 4-week IOP as a therapeutic cohort. The SMs and their family members who attended, usually in the latter weeks of the program, resided in a dedicated living facility on campus designed to extend the NICoE's therapeutic milieu. The SMs admitted to the IOP were invited to participate in a WRNMMC Institutional Review Board (IRB)-approved database protocol. Of these, 91.2% of patients consented, which allowed for the collection of all acquired clinical data relevant to TBI and to psychological and medical comorbidities, and its storage in a coded database prior to, during, and following the program. Consent also allowed for follow-up contact of all participants through telephone contact and/or electronic questionnaires once they returned to their duty station. Under an IRB-approved data use protocol, the study analyses were performed on de-identified data from the coded database in accordance with all federal laws, regulations, and standards of practice, as well as those of the DoD and the Departments of the Army/Navy/Air Force.

Procedures: Model of Care

The model of care in this interdisciplinary program is based on three foundational principles: (1) immediately provide a safe and trusting environment and address sleep disturbance and pain; (2) use a patient-centric approach to facilitate self-identification of the physical and existential injuries resulting in major post-concussive symptomatology and suffering; and (3) provide training and education to optimize long-term self-efficacy and self-advocacy. Following an intensive pre-admission review of the patient's medical record, each patient engages in a group interdisciplinary intake with core team members on the first day of the program. The core team members include an internist, neurologist, psychiatrist, neuropsychologist, family therapist, and a designated nurse specialist who serves as the SM's "touchstone" throughout the program. At the center of this team is the patient, who, with their family, communicates a narrative of the events that led to the physical, psycho-social, and existential injuries and shares their goals for recovery. During the first 2 weeks of the program, the patients undergo a progression of standardized evaluations and assessment tools, spanning providers from 17 disciplines, who coordinate a care plan taking into consideration all relevant diagnoses. The clinical schedule is then customized to the clinical needs of each patient. A patient engages in 6-8 h per day of clinical care, totaling ∼105-130 h of patient-provider contact during the program (Figure 1). The intensive time frame optimizes iterative patient feedback and allows for the rapid modification of treatment strategies by the care team.
All clinical providers meet formally twice weekly in interdisciplinary team rounds to share information and updates. Intensive evaluation and treatment of headaches and of neurological, vestibular, musculoskeletal, optometric, and audiologic disturbances are initiated to address somatic complaints. Comprehensive neurocognitive assessment with cognitive rehabilitation modules, occupational therapy, and speech-language pathology addresses cognitive disturbances, including the common complaints of concentration, memory, and language disturbances, and the sleep laboratory provides full sleep assessment and treatment. The program utilizes integrative medicine (IM) approaches to reach patient treatment goals of emotional regulation. The IM offerings have two main components: first, creative arts modalities, including the art therapy technique of mask making, music therapy, and therapeutic writing, are introduced to aid in BH assessment and treatment and to assist with the externalization of previously unreported existential trauma (25); second, mind-body techniques, including yoga, meditation, imagery, Tai Chi, nutrition, acupuncture, and animal-assisted therapy, are offered to help the patients learn self-regulation strategies to mitigate the effects of autonomic disturbance (26). Time is available in the 3rd and 4th weeks of the program to schedule additional sessions in those IM offerings which the patients find most efficacious. An average of 30 h is spent in IM techniques during the 4-week program, including an average of 9.6 h in creative art therapy. Up to 15 1-h patient and family educational modules are integrated into the evaluation and treatment phases of care to enhance understanding of the disease state, improve compliance by conveying the value of treatment, and increase self-advocacy following discharge. These educational modules include an introduction to the biological effects of TBI and operational stressors, sleep hygiene and management, nutrition, exercise triggers, and neurocognitive training (Figure 1). At the completion of the IOP, the NICoE primary care team lead and the patient engage in a "warm handoff" teleconference with the home-based PCP and nurse case manager to review the full set of findings and clinical recommendations. The PCP then assumes the lead on follow-up care. Cross-sectional assessments for long-term follow-up after the SMs' return to their duty station were collected through telephone interviews and/or a custom module on the Wounded, Ill, and Injured Registry (WIIR) that provides secure access to the electronic scale questionnaires. All patients were queried through the automated WIIR system at 1, 3, and 6 months after discharge from the IOP. All self-report scale (SRS) data used in this analysis were collected through the WIIR system.

Infrastructure

To effectively deliver integrative medicine treatments and assessments, a dedicated facility employing best practices of an optimal healing environment was designed, including maximal natural light in common spaces, the use of curved walls and wood tones, and the creation of environments for art and music therapy, yoga, and acupuncture (26). Furthermore, the facility was designed to co-locate key disciplines to facilitate the regular discussion of patient care plans and progress through formal and informal meetings. In addition, the facility was designed with an informatics technology capability for the systematic collection of evaluation, treatment, and outcome data elements to support practice-based evidence (PBE) analysis.
Study Design

The study followed a longitudinal design, using a pre-/post-test analysis, to assess the efficacy of the IOP in reducing symptoms among patients with mTBI and comorbid PH conditions. Patients with completed assessments at both admission and discharge were included in the primary analysis. For the long-term follow-up, patients were required to have completed assessments at both admission and discharge and to have responded to at least one of the 1-, 3-, or 6-month follow-up encounters to be included in the analysis (Figure 2). Symptom measures for all consented subjects, regardless of initial symptom severity score, were included in the analysis. Since this group would then include patients who had never had those symptoms or conditions, we performed a response-to-therapy analysis for patients who met the following symptom severity thresholds: PCL-M ≥35 (36), SWLS ≤19 (38), GAD-7 ≥10 (31), PHQ-8 ≥5 (30), ESS >10 (32), and HIT-6 ≥50 (33), based on previous reports (Table 1). Patients who scored at or above the threshold at admission for a specific measure were included in the analysis for that outcome measure (Figure 2). For the SWLS, scores within the range of 20-24 indicate general satisfaction; therefore, a cutoff of 19 or below was used to indicate significant dissatisfaction (38). The NSI does not have a composite threshold for symptomatic classification, and therefore an NSI change score was used.

Data Analysis

A patient's response to treatment was defined as the difference between the admission and discharge scores for each assessment scale. The response was classified as clinically improved, improved, or did not improve. Clinically improved was defined by point changes between admission and discharge scores, based on the available literature: NSI, ≥5-point change (34) (see Table 1). For the PCL-M, there is evidence that a five- to 10-point change represents a reliable change and a ≥10-point change represents a clinically significant change (37). The term improved refers to those showing a change in score on a given scale that indicated recovery but did not reach the clinical threshold. Assessment score normality was determined using the Shapiro-Wilk test and Q-Q plots. Nonparametric analysis was used to assess outcome differences at discharge and at 1, 3, and 6 months after discharge. The assessment instruments are summarized in Table 1; the entries for four of them are as follows.

GAD-7. Objective: to identify probable cases of generalized anxiety disorder (GAD) and assess symptom severity. A self-report questionnaire used to screen for and measure the severity of generalized anxiety disorder; patients rate the severity of seven symptoms and indicate their occurrence within the previous 2 weeks (31,40,41). Seven items on a four-point Likert scale: 0 (not at all), 1 (several days), 2 (more than half the days), and 3 (nearly every day). Global score ranging from 0 to 21; a higher score indicates more severe anxiety symptoms. Symptomatic range: ≥10. Clinically improved: ≥5-point change.

Patient Health Questionnaire (PHQ)-8. Objective: the assessment of mental disorders, functional impairment, and recent psychosocial stressors. A self-report used to screen for and diagnose depression, anxiety, and alcohol and eating disorders; the PHQ-8 has eight questions, whereas the PHQ-9 contains a ninth question about suicidal ideation (30,42,43). Eight items on a 0-3 Likert scale. Global score ranging from 0 to 24; a lower score indicates better quality of life. Symptomatic range: ≥5. Clinically improved: ≥5-point change.

Epworth Sleepiness Scale. Objective: to measure a subject's usual level of daytime sleepiness or average sleep propensity. A self-report questionnaire used to quantify daytime sleepiness; patients rate their chances of dozing off or falling asleep while engaged in different activities (32,44). Eight items on a four-point Likert scale. Global score ranging from 0 to 24; a higher score indicates greater sleepiness. Symptomatic range: >10. Clinically improved: ≥2-point change.

Headache Impact Test-6. Objective: to assess the impact of headaches. A self-report questionnaire designed to provide a global measure of the impact of adverse headaches on normal daily life and the ability to function (33,45). Six items on a five-point Likert scale: 6 (never), 8 (rarely), 10 (sometimes), 11 (very often), and 13 (always). Global score ranging from 36 to 78, with scores by item ranging from 6 to 13; a higher score indicates a greater impact on quality of life. Symptomatic range: ≥50. Clinically improved: ≥8-point change.

Mean ranked differences on all measures were compared for score changes from admission to discharge and at 1, 3, and 6 months using a Wilcoxon signed-rank test. Bonferroni corrections were used to control for potentially inflated family-wise error rates following multiple comparisons. The effect size of changes from admission was computed using the formula r = Z/√N (47).
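A minimal sketch of the nonparametric analysis just described, assuming paired admission and discharge score arrays for a single scale; recovering Z from the normal approximation to the signed-rank statistic is an implementation assumption here, not a detail stated in the text.

```python
import numpy as np
from scipy.stats import wilcoxon

def scale_change_test(admit, discharge, n_tests=7, alpha=0.05):
    """Wilcoxon signed-rank test on paired admission/discharge scores, with
    a Bonferroni-adjusted threshold and the effect size r = Z / sqrt(N)."""
    admit = np.asarray(admit, dtype=float)
    discharge = np.asarray(discharge, dtype=float)
    stat, p = wilcoxon(admit, discharge)   # zero differences are dropped
    n = len(admit)
    # Recover |Z| from the normal approximation to the signed-rank statistic
    mu = n * (n + 1) / 4.0
    sigma = np.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    r = abs((stat - mu) / sigma) / np.sqrt(n)
    return {"p": p, "significant": p < alpha / n_tests, "effect_size_r": r}

# e.g. scale_change_test(nsi_admit, nsi_discharge) for the NSI cohort
```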
RESULTS

A total of 1,456 patients consented to the study (Figure 2). The study population was 98.4% male and 1.6% female, with a mean age of 38.3 years (SD = 7.1). The majority were members of the Navy (51.5%) and Army (30.6%), with a mean service record of 17.3 years (SD = 7.0). The majority (90.0%) of SMs had a history of multiple TBIs (M = 7.0, SD = 8.3) (Table 2). Wilcoxon signed-rank tests revealed statistically significant improvements for symptomatic patients following treatment in the IOP on each of the seven assessments (Figure 3), with the median score on each scale, including the NSI, improving significantly from admission to discharge. After treatment in the 4-week IOP, patients whose symptom severity was at or above threshold at admission showed clinical improvements at discharge on each of the seven assessments: NSI (77%), PCL-M (57%), SWLS (53%), GAD-7 (72%), PHQ-8 (55%), ESS (72%), and HIT-6 (33%) (Figure 4). In addition, an analysis of all patients, regardless of presenting symptom severity, showed improvements across each of the seven assessments (Figure 4; Supplementary Figure 1). For all assessments except the HIT-6, clinical improvements were more likely to occur in patients with symptoms of the greatest severity. Assessment scores from patients at 1, 3, and 6 months revealed the durability of outcomes (Table 3). The NSI median scores at admission were significantly higher (p < 0.0001) than the patient-matched follow-up median scores at 1, 3, and 6 months, with medium effect sizes. The GAD-7 median scores at admission were significantly higher (p < 0.0001) than the patient-matched median scores at 1 month (Md = 10.5), 3 months (Md = 9), and 6 months (Md = 11), with medium effect sizes. The PHQ-8 median scores at admission were significantly higher (p < 0.0001) than the patient-matched median scores at 1 month (Md = 7), 3 months (Md = 7), and 6 months (Md = 8), with medium effect sizes.
The ESS median scores at admission were significantly higher (p < 0.0001) than the patient-matched median scores at 1 month (Md = 10), 3 months (Md = 11), and 6 months (Md = 12), with large to medium effect sizes. The HIT-6 median scores at admission were significantly higher (p = 0.0001) than the patient-matched median scores only at 1 month (Md = 58.5), with a small effect size.

Follow-up by electronic questionnaire was performed after discharge to obtain scores on the seven self-report scales (Supplementary Table 1). The average follow-up rate across all assessments was 15% at 1 month, 11% at 3 months, and 10% at 6 months. To address the concern that only those who had the best recovery would return the electronic follow-up self-report scales at 1, 3, and 6 months, the follow-up percentage of patients who clinically improved during the 4-week IOP was compared to the follow-up percentage of patients who did not show clinical improvement (Supplementary Table 1). No significant difference was found for any scale at any time point.

To test for potential selection bias in the longitudinal cross-sectional data, we compared the characteristics of those who responded to the follow-up self-report scales with those who did not respond at each of the three follow-up time points (Supplementary Table 2). Overall, the demographics of those who responded after discharge were comparable to the full study cohort at baseline. Notable exceptions were mean age, recorded at 40.5 (SD 6.4), 40.8 (SD 6.1), and 40.6 (SD 5.7) years at 1, 3, and 6 months, respectively, compared to 38.3 (SD 7.1) years for all participants; mean years of service, recorded at 19.5 (SD 6.8), 20.1 (SD 6.4), and 19.9 (SD 6.0) years at 1, 3, and 6 months, respectively, compared to 17.3 (SD 7.0) years; and rank. As for rank, officers made up 20.3% of those completing the program and 25.6%, 28.0%, and 25.95% of those returning the longitudinal follow-up surveys at 1, 3, and 6 months, respectively. To determine if rank imparts a differential recovery pattern, an analysis was performed to compare the longitudinal improvement between officers and enlisted personnel. The findings reveal that officers and enlisted personnel have similar recovery patterns of improvement in the NSI and PCL-M at 1, 3, and 6 months (Supplementary Table 3). Additionally, the mean number of TBIs was significantly higher, with means of 8.6 (SD 11.1) and 8.9 (SD 12.8) at 1 and 6 months, respectively, compared to 7.0 (SD 8.3) TBIs for all participants. Although modest in their difference, increased age, time in service, and number of TBIs would be anticipated to be associated with a less robust recovery. Thus, the current data support a positive impact of the IOP on symptom-improvement durability even in a less favorable population. Furthermore, assessing recovery based on TBI exposure revealed that those SMs with a history of the greatest number of concussive events (quartile 4) had the greatest improvement in the NSI during the IOP (mean change of 15.46) compared to quartile 1 (mean change 12.47, p = 0.005). All other self-report indices had similar recovery patterns between the 1st and 4th quartiles (Supplementary Table 4).
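As an aside, the attrition check reported above, comparing return rates between clinically improved and non-improved patients, reduces to a simple contingency-table test. A hedged Python sketch follows; the counts are invented for illustration, since the paper reports only that no significant difference was found.

```python
# Hypothetical attrition check: do clinically-improved patients return
# follow-up scales at a different rate than non-improved patients?
# The counts below are invented for illustration only.
from scipy.stats import chi2_contingency

# rows: clinically improved vs. not improved at discharge
# cols: returned the 1-month scale vs. did not return it
table = [[120, 680],   # improved: returned / not returned
         [40, 260]]    # not improved: returned / not returned
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p > 0.05 would suggest no response bias
```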
DISCUSSION

In the military, exposure to blast and blunt-force trauma and to operational stressors from combat and mission-training events related to the OEF/OIF conflicts resulted in an unprecedented number of military SMs presenting for treatment with these disorders. Despite conventional rehabilitation therapies, many SMs are deemed "unfixable" and forced to disengage from full duty. The present findings indicate that combining conventional rehabilitation therapies with integrative medicine significantly improved the symptoms of patients with combat-related mTBI and comorbid PH conditions who were not responding to conventional therapies at the time of their referral to the NICoE.

This study addressed three needed steps for advancing care in the TBI field: first, the applicability of delivering a complex treatment strategy over a sustained period in a large patient population; second, the need to address the simultaneous treatment of multiple domains; and third, the need to address the durability of recovery. The program has cared for over 1,500 service members from all branches of the military, including both enlisted personnel and officers, during the 7.5-year study period. Recovery was similar in all major groups of military service. The model of care largely eliminates the fragmentation and reductionistic approach to medical care that challenges typical healthcare delivery in complex cases. The sustained care delivery and continuous benefits in each cohort demonstrate the feasibility as well as the efficacy of the model. Findings from the study revealed a significant improvement across multiple clinical domains and, importantly, demonstrated durability.

The Institute of Medicine (2014) recommends that research in the field of traumatic brain injury should emphasize the clinical importance of using multiple validated rating scales to assess changes in co-morbid conditions (48). Our choice of measures assessed a broad range of symptoms across the psychological, physical, and cognitive domains. Future studies are needed to assess whether there is a hierarchy of conditions that would dictate the optimal treatment sequences for a complex disease state. Furthermore, though a significant level of improvement across multiple domains was seen in most patients, some subjects failed to show an improvement in one or more of the scales (range: 9-19%). Additional work is needed to better understand differences between those who responded and those who did not.

The improvements noted in this study support that the care model results in sustained benefit. The 1-, 3-, and 6-month cross-sectional follow-up of patients after they returned to their duty station revealed a sustained improvement across most outcome measures, except headache at 3 and 6 months (Table 3). These findings support the durability of the benefits from this care model even after the SMs return to their prior work environment. While it may be speculated that a 4-week hiatus from the service members' work environment contributed to the initial recovery from admission to discharge, the brief departure alone is insufficient to account for the durability of the responses seen from discharge to 6 months. These changes are especially notable given the chronicity of the population. SMs referred to the NICoE reported months to years of persistent neurological and behavioral disturbances before attending the program. In consideration of a potential selection bias in the longitudinal assessment, the severity of symptoms did not differ between the patients who responded and the entire cohort. Furthermore, our data indicate that those who were inclined to answer the follow-up questionnaires were older and had more TBIs.
SMs with these characteristics would have been anticipated to have a less robust recovery response, and therefore the favorable outcomes in the follow-up data are likely to represent a meaningful recovery that can be extrapolated to the larger cohort.

The factors that contribute to the sustained improvement are postulated to be related to the foundational principles of the care model. Elimination of fragmented care through the interdisciplinary care plan and patient-centric team feedback that promotes rapid iterative treatment assessments could enhance the opportunity for multiple simultaneous domain improvements. The establishment of trust between the patient and care team and the improvement of sleep disturbances and pain as an initial program goal may provide, respectively, the supportive non-judgmental environment and the physiological restoration. This warrants further study. Although conventional psychotherapeutic treatments for PTSD were not used during the IOP, the emphasis on integrative medicine techniques appeared to have a positive effect on BH conditions. Offerings such as art therapy, which was endorsed by SMs as extremely beneficial, are reported to leverage the externalization of the fragmented trauma narrative (25, 49), allowing the art therapist and other BH providers to guide the patients' processing of the traumatic events; this will be the subject of future research. Programmatic emphasis on self-efficacy skills-based training and disease-specific educational modules, also endorsed by SMs, may further contribute to longitudinal recovery strategies. Follow-up studies to identify which offerings were most helpful post-discharge are planned. The data regarding the association of chronic effects of multiple TBIs suggest that even SMs with a high number of exposures can experience improvement. Future studies must include detailed trauma history and characterization to better understand population risk and the response to different treatment strategies.

Finally, in addition to the significant response to treatment, the program was extremely well tolerated, with no patients leaving the 4-week program over the 7.5 years, a 100% completion rate. Only one of the 1,456 SMs withdrew his consent to be reached electronically after completing the program. This is in contrast to reported dropout rates of 36% or higher in conventional PTSD treatment (50). This intensive interdisciplinary integrative medicine paradigm is being adapted and implemented in 10 military treatment facility programs based on their individual patient needs, staffing availability, and diagnostic capabilities. The program synchronization will provide a previously unavailable opportunity to continue to refine our understanding of mild TBI and comorbid PH conditions in active-duty SMs and to assess the response to precision therapeutic strategies.

Study Limitations and Way Forward

There are several limitations that provide guidance for future studies. This study uses a PBE analysis without a control group, owing to the real-world application of care, so as not to withhold the next level of treatment from SMs who were not on a trajectory of recovery despite conventional care. As a tertiary-care facility program, the SMs were referred specifically due to the persistence or worsening of symptoms, and the study population had well-documented chronic neurological and psychological conditions prior to admission, arguing against spontaneous recovery.
Future studies that leverage a wait-list control group or use a variable model of treatment sequencing may provide greater confirmation of the program's efficacy and insight into which of the program's treatment strategies produce the most benefit based on subpopulation presentation. Another limitation is that the favorable response at follow-up could have been biased by the possibility that only those demonstrating a robust recovery would report their 1-, 3-, and 6-month scales. Based on their trajectory of recovery at discharge, we assessed the percentage of patients responding at all follow-up time points for all self-report scales. The rate of returned self-report scales was equivalent when comparing those who were clinically improved and those who were not at program discharge (Supplementary Table 1). Furthermore, though the study reveals a significant durability of outcomes up to 6 months, information regarding recovery over extended periods beyond that timeframe would be of significant utility. Also, since many of the skills-based techniques were endorsed by SMs during the IOP as beneficial, follow-up studies should also include information regarding patient endorsement of those treatments that help maintain a satisfactory recovery. Telephone follow-up, which was anticipated to obtain this type of information, was extremely limited because the majority of SMs returned to high operational activity.

In addition, a more precise characterization of concussive-event "dosing" would be helpful in identifying subpopulations for analysis of natural history and treatment response in the future. The number of events, blunt vs. blast vs. mixed concussions, the timing of events, and the physiological and emotional condition at the time of events can all play a role in brain injury severity. Reliable data collection strategies and models for the analysis of these complex interactions remain a gap. Given the benefits seen in the primary results of the study, future analyses to determine predictive factors from assessment modalities in the individual disciplines are being planned. Though advances in MR imaging are promising for identifying TBI changes, correlation with patterns definitive for chronic mTBI remains elusive. Future studies are planned for correlative assessment of MRI signal and specific injury patterns in this population. Finally, though not specifically assessed in this study, cost modeling of an intensive interdisciplinary program is needed to assure the sustainability of this paradigm of care delivery. More recent constructs of an integrated practice unit have been explored as a model for disease-based care and warrant further exploration as a long-term strategy for programs that demonstrate significant clinical benefit (51).

CONCLUSIONS

Our findings support that an interdisciplinary IOP that combines conventional rehabilitation therapies with integrative medicine techniques significantly improves a range of symptoms and holds promise for sustainable recovery among SMs suffering from co-morbid combat-related mTBI and PH conditions. These findings underscore the value of coordinated care delivery in complex brain injury and emphasize the importance of using symptom-specific outcome tools to assess efficacy in specific clinical subpopulations.
Future studies should consider methodologies that lend themselves to identifying the active components of this rehabilitative care model that account for the most variance in positive and sustainable treatment outcomes for service members with a history of mTBI and psychological health conditions.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Walter Reed National Military Medical Center Institutional Review Board (IRB). The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

TJD, GN, and JB conceived and planned the study. TJD, RK, JLB, GN, and WP planned and carried out the treatment model. KW and TAD contributed to the analysis and designed the tables and figures. TJD, LF, TP, JK, GG, TAD, KW, JLB, WP, and JB contributed to the interpretation and writing of the manuscript. All authors contributed to the article and approved the submitted version.
2021-01-18T14:17:19.063Z
2021-01-18T00:00:00.000
{ "year": 2020, "sha1": "8241b9c5edfaa9a37e221e4546786b0c5a048791", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2020.580182/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8241b9c5edfaa9a37e221e4546786b0c5a048791", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
250692883
pes2o/s2orc
v3-fos-license
Spontaneous parity violation in a quantum spin chain

We report on a spontaneous breakdown of parity in the ground state of a spin Hamiltonian involving nearest-neighbor interactions. This occurs for a one-dimensional model where spins transform under the gauge field representation of QCD, the eight-dimensional adjoint representation of SU(3). The ground state spontaneously violates parity and is two-fold degenerate. In addition, the model possesses a non-vanishing topological string order parameter which we explicate analytically.

Introduction

Symmetries and conservation laws constitute an essential ingredient of physical theories [1]. An intriguing situation arises when a given microscopic model Hamiltonian is known to be symmetric under a certain type of transformation, but the expected regularity associated with this symmetry does not manifest itself in the physical properties. Beyond anomalous breaking, the only other mechanism known for this type of event is spontaneous symmetry breaking. There, the ground state of a system does not share the symmetries of the Hamiltonian. For continuous symmetries, a prominent example is the ferromagnetic phase in the Heisenberg model, where the ground state breaks the SO(3) rotational symmetry by singling out one particular magnetization axis.

In this article, we consider the discrete parity symmetry P. Parity symmetry breaking (PSB) for the weak interaction has been a milestone in particle theory [2]. In the theory of magnetism, PSB has emerged in different contexts in higher dimensions, and is generally referred to as the real-space component of chiral symmetry breaking [3], which additionally includes time-reversal symmetry breaking T. However, in most cases it has remained elusive to define a model which spontaneously generates PSB. In the spin chain model which we discuss here, the algebraic structure of the adjoint representation of SU(3) at each site of the spin chain allows us to accomplish this. In this contribution, we analyze the degenerate ground-state manifold of this model and study some low-energy properties. Furthermore, we observe that the model possesses a topological string order which we calculate exactly.

The model and its properties

To construct a valence bond state with spins transforming under the eight-dimensional adjoint representation 8 of SU(3), we place a fundamental representation 3 and an anti-fundamental representation 3̄ of SU(3) on each lattice site. We then project the resulting tensor product onto the symmetric subspace, which yields the adjoint representation: S{3 ⊗ 3̄} = 8. We generate an overall spin singlet state by coupling each representation 3 antisymmetrically with a representation 3̄ on the neighboring site into a singlet bond: A{3 ⊗ 3̄} = 1. This construction yields two linearly independent representation 8 states, Ψ_L and Ψ_R, which may be visualized diagrammatically [figure omitted]: one state and its parity conjugate are obtained by interchanging the fundamental (small circles) and anti-fundamental (larger circles) representations; the big circles indicate a lattice site, and the horizontal lines between the sites are singlet bonds. As the construction is analogous to AKLT states [4], we may write the state vectors Ψ_L and Ψ_R as matrix product states.
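For readers less familiar with SU(3) representation arithmetic, the dimension counting behind this construction can be summarized in one display; this block merely restates, in standard notation, the decomposition already used above.

```latex
% Dimension check for the on-site construction: the product of the
% fundamental and anti-fundamental representations of SU(3) decomposes
% into the eight-dimensional adjoint plus a singlet (3 x 3 = 8 + 1).
\[
  \mathbf{3}\otimes\bar{\mathbf{3}} \;=\; \mathbf{8}\oplus\mathbf{1},
  \qquad
  \mathcal{S}\{\mathbf{3}\otimes\bar{\mathbf{3}}\} = \mathbf{8},
  \qquad
  \mathcal{A}\{\mathbf{3}\otimes\bar{\mathbf{3}}\} = \mathbf{1}.
\]
```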
Taking (b, r, g) and (y, c, m) as bases for the representations 3 and 3̄, respectively, we obtain the matrix M of on-site states given in Ref. [5]. Assuming periodic boundary conditions (PBCs), the representation 8 states Ψ_L and Ψ_R are hence given by the trace of the matrix products over all sites,

Ψ_{L/R} = tr( M^{L/R}_1 M^{L/R}_2 ⋯ M^{L/R}_N ).   (1)

These states transform into each other under space reflection or parity symmetry: PΨ_L = Ψ_R, where the discrete parity transformation is defined as the spatial reflection of the chain, P: S_i → S_{N+1−i}. Here S_i is an eight-component spin operator corresponding to the eight-dimensional representation of SU(3), and N denotes the number of lattice sites. Eigenstates of the parity operator with eigenvalues ±1 are given by the even (symmetric) and odd (antisymmetric) superpositions Ψ_± = (1/√2)(Ψ_L ± Ψ_R). A parent Hamiltonian which annihilates the states Ψ_± is given, up to an additive constant, by [6]

H = Σ_{i=1}^{N} [ S_i · S_{i+1} + (2/9)(S_i · S_{i+1})² ].   (2)

The parity operator P commutes with the Hamiltonian, [H, P] = 0, while the ground states Ψ_{L/R} (or Ψ_±) spontaneously violate this symmetry. The model (2) and its entropy were studied by Katsura et al. for open boundary conditions (OBCs) and fixed dangling spins [7], while similar models with other symmetries were introduced in Refs. [8, 9, 10].

Starting from (2), it is natural to ask what happens for a representation 8 Heisenberg model with pure bilinear interactions S_i · S_{i+1}. We find numerically that the ground-state degeneracy is lifted when the prefactor of the biquadratic Heisenberg interaction is decreased from 2/9 to 0 (see Figure 1). Preliminary finite-size scaling data indicate that the splitting between the two lowest-lying states of the Heisenberg model vanishes as we approach the thermodynamic limit N → ∞, as required for a spontaneous symmetry violation. Above the two degenerate ground states, we expect a Haldane-type gap for the lowest-lying excitations [5]. This conclusion is further supported by finite-size scaling data not included here, as well as by the exponential decay of the static spin-spin correlations evaluated in the following section.

[Figure 1: energy spectra as a function of the biquadratic prefactor α. PBCs have been imposed for both even (N = 8, left plot) and odd (N = 9, right plot) numbers of sites. The spectrum of the Heisenberg model differs considerably for even and odd numbers of sites. In the region around the exact model (2), α = 2/9, the spectra become similar. As we move from (2) to the Heisenberg point, the ground-state degeneracy is lifted. For an even (odd) number of sites, Ψ_+ (Ψ_−) becomes the ground state for 0 ≤ α < 2/9.]

String order parameter

In analogy to Refs. [10, 11, 12], we define string operators of the form

O^{ab}_{1j} = ⟨ J^a_1 exp[ iπ Σ_{k=2}^{j−1} ( J^3_k + 2J^8_k/√3 ) ] J^b_j ⟩,   a, b ∈ {3, 8}.

With the matrix product representations (1), the string order can be calculated using the method introduced by Klümper et al. [13] for the q-deformed model. The analysis proceeds in several steps. First, we calculate the norm of the matrix product state Ψ_L. We introduce the complex conjugated matrix M̄ according to M̄_{σσ′} = M*_{σσ′}, i.e., by taking the complex conjugate of each matrix element of M without transposing the matrix. We then define the 9 × 9 transfer matrix R at any lattice site as R = M̄ ⊗ M, where we order the indices as α, β = 1, ..., 9 ↔ (11), (12), ..., (33). Finally, the norm is given by ⟨Ψ_L|Ψ_L⟩ = tr(R^N), where we have evaluated the trace by diagonalization of R. Second, we introduce the transfer-matrix representation Ĵ^{3,8} of the spin operators J^{3,8}. Third, we introduce the operator A = M̄ e^{iπ(J^3 + 2J^8/√3)} M, such that the string correlator reduces to a trace over products of transfer matrices. In the same way we obtain, in the limit N → ∞, the string order parameters for Ψ_R, with O^{ab}_{1j,R} = O^{ba}_{1j,L}. Note that the expectation values of the string order parameters remain finite in the limit j → ∞.
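The transfer-matrix evaluation sketched above is easy to reproduce numerically for any matrix product state. The following Python fragment is a generic illustration, not the authors' code: the 3 × 3 matrix of on-site states of Ref. [5] is replaced by a hypothetical random MPS tensor, and the norm tr(R^N) is evaluated by diagonalizing R.

```python
# Generic transfer-matrix evaluation for a periodic matrix product state.
# Illustrative sketch with a random tensor, not the specific SU(3) matrix
# M of Ref. [5]; d = physical dimension, D = bond dimension.
import numpy as np

rng = np.random.default_rng(0)
d, D, N = 8, 3, 12                  # e.g. 8 on-site states, 3x3 matrices
M = rng.normal(size=(d, D, D)) + 1j * rng.normal(size=(d, D, D))

# Transfer matrix R = sum_s conj(M^s) (x) M^s, a (D^2 x D^2) matrix,
# with indices ordered as (11), (12), ..., (DD) as in the text.
R = sum(np.kron(M[s].conj(), M[s]) for s in range(d))

# Norm of the periodic MPS: <Psi|Psi> = tr(R^N) = sum_i lambda_i^N,
# evaluated by diagonalization of R.
lam = np.linalg.eigvals(R)
norm = np.sum(lam**N)
print(norm.real)

# An operator insertion (e.g. the string operator built from A above) is
# handled the same way: replace R at the chosen sites by the corresponding
# D^2 x D^2 matrix and trace the mixed product, schematically
# tr(A1 @ R^(j-2) @ A2 @ R^(N-j)) / tr(R^N) for a two-point function.
```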
In analogy to the original AKLT model [14], we attribute this string order, as well as the 18-fold ground-state degeneracy of a chain with open boundary conditions, to the violation of a discrete symmetry. Obviously, by choosing A = R, one obtains the static correlation functions ⟨J^a_1 J^b_j⟩ = −δ^{ab} (−1)^j (27/2) 8^{−j}, i.e., we find exponentially decaying correlations; since 8^{−j} = e^{−j ln 8}, the correlation length is ξ = 1/ln 8 ≈ 0.48 lattice spacings.

Generalization to SU(n)

The model (2) is a special case of a model of SU(n) spins transforming under the (n² − 1)-dimensional adjoint representation S{n ⊗ n̄}, with a Hamiltonian of the same bilinear-biquadratic form as (2). The ground state is again two-fold degenerate and exhibits exponentially decaying correlations with ξ = 1/ln(n² − 1). Note that n² − 1 is just the dimension of the adjoint representation. We conjecture that the corresponding Heisenberg model has a two-fold degenerate ground state due to the broken parity symmetry as well.

Conclusion

In this contribution, we have established an example of spontaneous parity violation in a quantum spin chain. We further identified a non-local string order parameter for the model considered.
2022-06-28T03:41:57.492Z
2010-01-01T00:00:00.000
{ "year": 2010, "sha1": "06f62eced05ff2ecfb9f3bcb774170d38d32b6eb", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/200/2/022049", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "06f62eced05ff2ecfb9f3bcb774170d38d32b6eb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
225166812
pes2o/s2orc
v3-fos-license
"My tutor doesn't say that": The legitimized voices in dialogic reflection on teaching practices

In the construction of teachers' professional knowledge, reflective practices are a fundamental tool that responds to the need to connect theoretical principles with practical resources and to the improvement of teaching by means of critical analysis. The Practicum, as a dialogic structure for the explanation and interpretation of teaching practices, provides teachers in training with an opportunity to build their own understanding based on dialogue and reflection. Invocation is one of the resources used to legitimize scientific or disciplinary knowledge in joint reflection. Qualified voices are called upon and made present in classroom discourse to validate descriptions or explanations. We are interested in defining the profile of the invocations introduced in dialogic reflection, as sources of legitimation of knowledge, and in identifying the patterns in the sequence of the invocations' appearance. This work consists of an exploratory study of multiple cases, in which each case is a classroom unit composed of a tutor and her student teachers. Two cases from the Practicum in a Primary Education Teacher Degree were selected. A category system was developed for the analysis of invocations and organized into four dimensions: academic or professional knowledge, experiential knowledge, invocation of truth, and invocation of ideology or values. The results allow us to highlight some relevant conclusions. Invocations are a widespread resource in a process of dialogic reflection to legitimize the interpretation of educational practices. The participation of student teachers in dialogic reflection is possible and abundant thanks to the experience of the Practicum, which provides a validity criterion for their arguments, supported by the invocation of the authority of teaching experiences. In this study, tutors' efforts to connect pedagogical principles with personal experiences in the Practicum have not clearly translated into student reflections in the same direction. The paper closes by paying attention to the competencies and training that Practicum tutors need.

Introduction

In the construction of teachers' professional knowledge, reflective and dialogic practices are a fundamental tool that responds to the need to connect theoretical principles with practical resources and to the improvement of teaching by means of critical analysis (Clarà & Mauri, 2010; Cubero-Pérez, Cubero & Bascón, 2019; Feiman-Nemser, 2001; Mauri, Cubero, Bascón, Colomina, Cubero, Jiménez & Usabiaga, 2015). The development of professional practical knowledge (Fenstermacher, 1994; Schön, 1983) supported by the teaching experiences of teachers in training has been examined through a variety of strategies (Darling-Hammond, 2017; Korthagen, 2010), with intervention in the Practicum being one of the pillars on which recent research is based. The Practicum provides teachers in training with an opportunity to build their own understanding of teaching based on dialogue and reflection on the practical dilemmas found in their practice (Cuenca, 2010). We think, then, of teacher training as an activity situated in specific cultural practices, where student teachers develop an identity and appropriate a discourse and methods to define problems, which will allow them to be active members of their community of practice (Matusov & Hayes, 2002).
A crucial element in the Practicum is the figure of the academic tutor, a socializing agent who supports and legitimizes the construction of knowledge about teaching (Cubero-Pérez, Cubero & Bascón, 2019; Cuenca, 2011). Tutors play a fundamental role in legitimizing professional knowledge and, at the same time, in conferring legitimacy on students in the teaching community, accepting their contributions in an expert discourse and helping them feel like teachers (Feiman-Nemser, 2008). But teacher education based on practical knowledge requires more than just telling teachers in training how or what to teach (Cuenca, 2010). Taking into consideration the importance of the tutor in the legitimization of practices and in the socialization of teachers in educational intervention, the selection and training of the tutors becomes a fundamental aspect of the curricula (Cuenca, 2011; Dinkelman, Margolis & Sikkegna, 2006).

Invocation is one of the resources used in classroom dialogue to legitimize scientific or disciplinary knowledge (Cubero & Ignacio, 2011). We call invocation the resource consisting of statements that rely on, and take as a reference, different elements of academic knowledge and/or the personal experience of speakers. The elements of that academic knowledge or personal experience that are introduced into classroom discourse are used as justification for the versions that are presented in the classroom. Thus, a qualified voice is called upon and made present in the dialogic teaching process, relating it to a specific description or explanation. The analysis of invocation as an argumentative process allows us to investigate how the relevant sources of knowledge are established in dialogic interactions in the classroom, and how the empirical facts and theories that count as valid are constructed. Relating the use of this resource to our theoretical approach, since learning implies appropriating a specific discourse, a form of activity in which the meaning of experience is constructed with words (Lemke, 1990), the investigation of invocations shows the way in which academic knowledge is constructed versus everyday knowledge or other types of knowledge (Candela, 1999; Edwards, 1993; Hatano & Inagaki, 1991). This analysis also deepens our understanding of the uses of discourse according to the rules specific to formal educational contexts (Edwards, 1993; Edwards & Mercer, 1987; Candela, 1999), as well as of the criteria that science uses to legitimize a certain explanation instead of other possible ones. Therefore, the legitimate versions of events offered or built in the classroom are those that are expected to become the knowledge shared by the classroom community. In this sense, the analysis of the resource called invocation contributes to the explanation of how shared knowledge is constructed in dialogic classrooms. Also, taking up the argument of the importance of the selection and training of Practicum tutors, the analysis of invocations makes it possible to define an intervention profile in which the tutor contributes to the socialization of the students in teaching practices through participation in the legitimization of their own practices.
Regarding the functions of invocation, we can affirm that they are to: 1) establish knowledge as a valid version, as a scientific, academic, or culturally acceptable description-explanation; 2) offer elements that support and justify a certain version of knowledge; 3) define or describe a situation (concept, explanation, activity, experience) in the classroom; 4) control and/or direct an activity or experience that takes place in the classroom; and 5) display the criteria that science, a discipline, or experience uses to legitimize a given explanation instead of other possible ones (Cubero & Ignacio, 2011).

Invocations are closely related to Bakhtin's concept of voice (1981, 1986) and to the dialogic structure of discourse. The analysis of educational discourse reveals that it is polyphonic and that it contains numerous voices which refer to different perspectives and movements in the process of knowledge construction, as well as to different roles played by teachers and students (Cubero & Ignacio, 2011). Cazden (1993) and Wertsch (1990, 1991) have paid special attention to Bakhtin's concept of voice in relation to the school context, since the school presents some voices as better, or as privileged forms, compared to other discursive forms. Being respected, being heard in a classroom, implies speaking with the voice of truth (that of official science, for example), a voice that the teacher uses to establish the truth and hold a position of authority.

On the other hand, invocation as a resource to build a valid and culturally accepted explanation is related to other resources that appear in classroom dialogue and reflection. Candela has described the classroom as a place where legitimized versions of the facts are constructed, which, through the intervention of the teaching staff, aspire to become shared knowledge. Some of the discursive procedures through which the legitimacy of knowledge is established are argumentation, the search for consensus, analogies, recourse to perceptual evidence, and the authority of specialists (Candela, 1999). The analysis of discourse in these studies shows how scientific facts are constructed in the classroom. The author is also interested in analyzing the active role of students in these processes, who use their experience to participate in the dynamics of the classroom, whether or not they are invited to do so. The demand for answers by teachers can produce a type of student participation focused on the elements of the teacher's discourse (the clues she gives, the examples she uses, the way she talks about the facts), but on other occasions it is essential that students contribute their experience and meanings, and thus become part of the classroom discourse (Candela, 1999; Cubero-Pérez, Cubero & Bascón, 2019). Faced with the argumentative activity of the students, teachers can perform different movements. They can block students' access to shared classroom dialogue, for example, when they do not incorporate their ideas into the joint construction.
Teachers can also incorporate the students' arguments into the discussion, identifying them as a valid source of authority and leaving different possibilities of understanding open, without there being a single legitimate version of knowledge, but rather a plurality of authorized voices (Candela, 1999).

In this study we are interested in the joint reflection which is developed in the subject of the Practicum, as a dialogic structure for the explanation and interpretation of teaching practices. Also, in the framework of this reflection, we are interested in the study of invocation as a means of legitimizing what and how to teach. This will allow us not only to describe the sources of validation, but also to contrast epistemologically different types of knowledge, and to explore the agency of teachers in training in the definition and resolution of the dilemmas posed by educational intervention. The specific objectives of this study are to define the profile of the invocations introduced in dialogic reflection and to identify the patterns in the sequence of the invocations' appearance.

Participants

This study is part of a larger research project entitled "Aid for the construction of knowledge in the Practicum of teachers: Joint reflection to improve the theory-practice relationship" (http://www.mineco.gob.es/). It consists of an exploratory study of multiple cases (Yin, 2009), in which each case is a classroom unit composed of a tutor and her student teachers. Concretely, two cases were selected (named case F and case H). These are two classroom units that were integrated into the activities developed in the subject of the Practicum in a Primary Education Teacher Degree. They were composed of a tutor and 15 students, and a tutor and 9 students, respectively.

Materials

The proposed activity was that student teachers individually wrote a description of a situation experienced in their pre-service teaching practices. The situation was to be described avoiding interpretations and would later be discussed in the Practicum classroom. Table 1 shows the content/topic of each situation described by the student teachers. It should be noted that in case F the number of situations treated is double (2 in each session) that of case H.

Table 1. Contents/topics of the situations described by student teachers in a written narrative.

Case F:
s1. Controversy over the decision of the management team regarding children repeating grades.
s2. Refusal of the parents of a child with Down Syndrome to receive support from the school.
s3. Measures a teacher takes with a child who lies by saying that she receives threats from two classmates.
s4. Xenophobic behavior of a child and decisions taken by the school.
s5. Learning problems of a girl with an unstructured family due to parental divorce.
s6. A child who needs reinforcement classes outside the classroom.
s7. Discrepancy between a teacher and the parents of a child with behavioral and school problems.
s8. Collaborative work agreement between the teacher and the parents of a violent and aggressive child.
s9. A child with special educational needs who is inadequately treated due to lack of resources.
s10. A child with low performance following a program to reinforce learning and integration in the classroom.

Case H:
s1. Controversy over the decision of the management team regarding children repeating grades.
s2. Differences between the teacher and the student teacher about working with an ADHD child.
s3. Discrepancy between the teacher and a mother about the inappropriate behavior of a child in class.
s4. Conflict resolution by a teacher in a fight between three children, one of them with ADHD.
s5. A teacher withdraws the ABN method of teaching mathematics due to cognitive dissonance.

Procedure

The study consisted of several phases:

Phase I. Tutor and case selection. In this study, we selected those cases that we believed would provide the most abundant information on educational aids. The sample selection process corresponds to what Goetz and LeCompte (1984) call criterion-based selection and Patton (1980) calls purposeful sampling. For the selection of the tutors, and therefore of the case studies, an initial in-depth interview was carried out in order to explore their didactic and teaching strategies, as well as the importance given to reflection within them. Their interest in and predisposition to participate in a project of this nature were also taken into account. Likewise, in accordance with the methodological approach of the study, the choice of tutors was based on the fact that they are promoters of good educational practices and have extensive teaching experience in the Practicum, according to the assessment of the university's teaching staff.

Phase II. The activity in the classroom was intended to help students connect their experience in school practice with academic knowledge, through joint reflection between the students themselves and the tutor. Thus, prior to the seminar, the students had to individually write up a situation that they had experienced in their practices and that had particularly caught their attention. The situation had to be described as literally as possible, trying to avoid interpretations. More specifically, seminars were held with the students and the tutor, starting with the reading of each situation by the student who had written it; in each session one or several students were responsible for presenting theirs. Then the conversation was opened to the group for dialogic reflection. The 5 sessions in which the dialogic reflection activity between the tutors and the student teachers took place, each of approximately one and a half hours' duration, were recorded on video and audio for each case study.

Phase III. Once the tutorials had concluded, a final interview was held with each tutor to collect their impressions of how the groups functioned, the problems they had encountered, their suggestions and degree of satisfaction, etc.

The data corresponding to Phase II, that is, the tutor-student interactions during the seminar sessions, are analyzed and presented in this study. We would like to make an observation about the nature of the method and the results. In case studies, the generalization of the data to broader contexts is not based on a statistically representative random sample, but rather on the deepening of a case or a small number of cases (Denscombe, 2010; Gerring, 2007; Giménez, 2012; Ragin and Becker, 1992; Yin, 2009).
Since our interest is to be able to explain not only what happens in a specific case, but also what this implies for the explanation of teacher-training processes, the cases were selected on the basis of their analytical generalization (Yin, 2009), according to the criteria for the strategic selection of critical and typical cases (Denscombe, 2010; Flyvbjerg, 2001). Thus, the generalization of the results does not refer to the concrete frequencies, but to the conceptual model that emerges from the analysis. Our study, therefore, aims to analyze a small number of cases in depth and to develop a model that can be extended and serve to analyze the construction of professional knowledge during the Practicum.

Category System

The data processing and its organization were carried out through the use of a category system based on previous studies (Cubero & Ignacio, 2011) and reworked specifically for the research project to which this work belongs. The unit of analysis used for the coding and delimitation of discursive extracts was the turn of intervention of each participant in classroom dialogue. The category system was developed for the analysis of invocations, or knowledge validation sources. These categories are organized into 4 dimensions: academic or professional knowledge, experiential knowledge, invocation of truth, and invocation of ideology or values (Table 2). The first refers to interventions that introduce the voice of academic or professional knowledge into dialogic reflection as a way to establish the correct knowledge, or what counts as correct. The second includes the interventions through which the individual, collective, or cultural experiences of the participants are introduced. The third includes assertions that appeal to absolute truths, which are considered expressions of truth and sometimes do not incorporate an empirical argument or justification. Finally, the fourth dimension contains ideological issues related to a specific value system.

Table 2 (fragment). Examples of categories in the system:
- Example of an invocation: "My friends who study sciences laugh when I tell them that Pedagogy is also a science."
- IEG, group-session experience: utterances that take as a reference a collective idea which is shared by the participants because it has been generated in the classroom microcosm and dialogue. Example: "As Hugo is saying, and we defended before, I believe that we have to create the conflict."
- Truth related to a present experience: utterances that take as a reference an assertion on the nature or description of an object or process; the assertion is based on a direct real-time experience and is considered an expression of truth. Example: "If I shout loudly right now (and she shouts) you get scared, you see?"
- IVN, not related to present experiences: utterances that take as a reference an assertion on the nature or description of an object or process; the utterance is not related to current direct experiences and is considered an expression of truth. It is included in discourse without the need for any proof or argument in its favor. Example: "You can't educate a child without punishing him when he needs it."
- Ideology and values. IIV, ideology or value system: utterances that take as a reference the perspective of specific value systems or ideologies that are explicitly mentioned. Example: "The right to education is a right for all children, as stated in the Human Rights Charter."

In the process of developing the category system, the categories were verified by each team member (three members) in several re-readings of the transcripts. The different categories identified by each member were shared and described to establish an emerging list of them, combining or dividing them according to their best ability to explain the discursive process. This mechanism involves an iterative process between created categories and re-analysis until a network of categories is obtained by saturation (Clarke, 2005). Finally, a procedure was carried out to establish the reliability of the category system between observers (Cohen's kappa coefficient over 0.87; Cohen, 1960).
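As an illustration of the reliability step, inter-coder agreement on turn codings can be computed with Cohen's kappa. A minimal Python sketch follows, assuming two hypothetical coders' category labels per turn (the labels and data are invented; the study used three coders and reports kappa above 0.87):

```python
# Minimal sketch: Cohen's kappa between two coders' invocation categories.
# The turn labels below are invented for illustration.
from sklearn.metrics import cohen_kappa_score
from collections import Counter

coder_a = ["IEP", "IVN", "IEG", "IEP", "IAD", "IIV", "IEP", "IVN"]
coder_b = ["IEP", "IVN", "IEG", "IAD", "IAD", "IIV", "IEP", "IEP"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # values above ~0.80 indicate strong agreement

# Frequency/percentage profile of categories, as used in the results tables:
counts = Counter(coder_a)
total = sum(counts.values())
for cat, n in counts.most_common():
    print(cat, n, f"{100 * n / total:.2f}%")
```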
Analysis and results

The data treatment carried out corresponds to the identification of the invocations in two complete work sessions of the two Practicum tutors and their respective student teachers. The sessions analyzed are the first and the fifth, with the aim of contrasting the intervention resources used. The calculations were made based on the frequency and percentage of each category. The results are described according to the specific objectives set out in the introduction. First, the case F data are shown, followed by those of case H.

Results of case F

In the overall profile of invocations (Table 3), the most used have been those referring to the authority of academic or professional knowledge (40.56%), followed by invocations of experiential knowledge (32.87%) and invocations of the truth (22.38%). In relation to the most used concrete categories, we highlight the invocations of experience of didactic-educational practice (25.87%), followed by the invocations of truth not related to present experience (18.18%) and the invocations of the formalized knowledge generated from the experience of the group session (11.89%).

This profile of invocations is different when the interventions of the Practicum tutor and the student teachers are analyzed separately (Table 4). The Practicum tutor uses, almost exclusively, invocations in the authority of academic or professional knowledge dimension (84.62%). The most frequent categories in this dimension have been the invocations of the knowledge generated from the experience of the group session (38.46%) and the invocation of the authority of the expert, an author or professional group (28.21%). Both categories of invocations are almost exclusively present in the tutor's interventions. The tutor also appeals to formalized knowledge (15.38%). Students most frequently invoked their experience (43.27%), followed by invocations of the authority of the truth (28.85%) and, finally, of the authority of academic-professional knowledge (24.04%). Invocations of the truth were almost exclusively used by the students. In relation to the most frequently used categories, we highlight the invocation of experience of didactic-educational practice in the classroom (35.58%), statements about the truth that are not related to present experiences (24.04%), and the invocation of the authority of the school teacher with whom the students develop their teaching practices (10.58%).
The last two categories are almost exclusively used by students.

There are also changes from the first to the fifth session in the distribution of the different dimensions and categories with respect to the overall profile (Table 5). Invocations of academic knowledge are introduced in both sessions with a similar frequency and weight (from 40.70% to 40.35%). The same does not happen in the other dimensions. Invocations of experiential knowledge increase from the first to the fifth session (from 18.60% to 54.39%), while invocations of the authority of truth (from 33.72% to 5.26%) and of value systems (from 6.98% to 0%) are reduced. In the distribution of the specific categories used in the first and fifth sessions, there is an increase in appeals to experience of didactic-educational practice in the classroom (from 12.79% to 45.61%), to the author's authority (from 5.81% to 14.04%), and to the authority of the formalized knowledge generated in the group session (from 10.47% to 14.04%). Invocations of a general truth unrelated to present experiences (from 26.74% to 5.26%), invocations of truths related to present experiences, and invocations of values or ideologies are reduced (the latter two to the point of practical disappearance).

If we compare both sessions for the specific case of the Practicum tutor (Table 5), the appeals to the authority of an author or professional group almost tripled (from 3.49% to 14.04%). The declines are concentrated in the categories belonging to the dimensions of invocations of truths and of knowledge related to ideologies. In the student teachers, the changes are concentrated in a reduction, to the point of practical disappearance, of the categories included in the dimensions of invocations of the truth and of ideologies and value systems. In a complementary way, we observe an increase in invocations based on experiential knowledge, which practically quadruple. The increase in this dimension is concentrated in the category of experience of didactic-educational practice in the classroom (from 12.79% to 45.61%). The dimension referring to academic knowledge remains stable, although in the fifth session the use of the authority of the author or professional group increases (from 3.49% to 14.04%) while the use of invocations of the school teacher (from 9.30% to 5.26%) and of formalized knowledge (from 10.47% to 0.00%) decreases.

Results of case H

In the overall profile of invocations (Table 6), the most used have been those referring to the authority of academic or professional knowledge (52.90%), followed by invocations of experiential knowledge (30.90%). In relation to the most used categories, we highlight the invocations of experience of didactic-educational practice in the classroom (23.00%), followed by the invocations of formalized knowledge (18.30%) and of the authority of the academic discipline (14.70%). This profile of invocations is different for the Practicum tutor and the student teachers (Table 7). The tutor of case H basically appeals to the authority of academic or professional knowledge (77.78%). The most frequent categories in this dimension have been invocations of the authority of the academic discipline (30.86%) and of the formalized knowledge generated in the group session (27.16%), both almost exclusively used by the tutor.
For their part, the students most frequently invoked their experience (44.56%), followed by the authority of academic-professional knowledge (34.55%) and the truth (20.00%). If we analyze production by categories, the invocation of experience of didactic-educational practice in the classroom stands out (39.10%). This category, moreover, is almost exclusively used by the students. It is followed in percentage terms by the invocations of the voice of formalized knowledge (21.82%) and by assertions of truth not related to present experiences (16.39%).

In the invocation profile by session, we observe changes in the distribution of the different dimensions and categories (Table 8). From the first to the fifth session we observed an increase in the invocations of the authority of academic or professional knowledge (from 47.58% to 62.69%), and a decrease in invocations of experiential knowledge (from 34.68% to 23.88%) and of the truth (from 16.13% to 11.94%). In the distribution of the specific categories from the first to the fifth session, the results point to an increase in the categories of invocation of the authority of the academic discipline (from 8.87% to 25.37%) and of the formalized knowledge generated in the group session (from 8.87% to 16.42%). Among the categories that decrease, we can point to the invocation of the authority of the school teacher (from 11.29% to 2.99%), the invocation of experience of didactic-educational practice in the classroom (from 25.81% to 17.91%), and the invocation of truth not related to present experiences (from 13.71% to 7.46%). If we observe both sessions for the specific case of the Practicum tutor, three of the four categories included in the first dimension increase (IAD from 8.87% to 20.90%; IAC from 4.84% to 7.46%; and IAGP from 8.87% to 16.42%). In the students, we observe that the narrated personal experiences (IEG, IEP and IEC) tend to diminish from the first to the fifth session. Invocations of the authority of the school teacher and of formalized knowledge also decrease (from 8.06% to 1.49% and from 13.71% to 10.45%, respectively), as do the invocations of truth not related to present experiences (from 11.29% to 5.97%).

Discussion and Conclusions

Regarding the profile of the invocations incorporated into dialogic reflection, the dimensions most identified, taking the Practicum tutor and the student teachers together, have been, in order of frequency, the reference to academic or professional knowledge, the reference to experiential knowledge, and statements based on the truth.
This does not mean that students do not have a relevant role in the joint reflection process. The sessions take place with a strong presence of interventions by the student teachers, permanently invited to participate by the tutors. Their invocation profile is very active and shows a contribution to the dialogue based, fundamentally, on the teaching experience they acquire during the Practicum, combined with arguments based on academic concepts, and statements that are considered true and that are held without reference to a source of authority beyond the fact that they are introduced in discourse as an unquestionable truth. Indeed, we insist that the voice of the student teachers is possible and abundant in dialogic reflection thanks to the experience of the Practicum, which provides a validity criterion to the arguments supported by the practical evidence. Then, the fact of the teaching experience itself allows the students to have a voice and participate in dialogue. On the interactive level, this voice is possible because the tutor incorporates and legitimizes the students' voice accepting the validity of their experience (Candela, 1999;Cubero & Ignacio, 2011;Feiman-Nemser, 2008). In fact, the contrast between the first and the second analyzed session shows that the intervention profile changes in the sense we are describing, so that access to debate with authority arguments is quadrupled for cases of experiential arguments specifically linked to experiences in the course of the teaching practices during the Practicum. Consequently, the use of assertions that are introduced with the status of an unquestionable truth (a category that except in one case is only used by students), decreases radically. This means that interventions based on the voices of others (truth or academic knowledge) (Bakhtin, 1981) progress towards the student agency based on practical experience. We can affirm that the experience provides teachers-in-training with arguments that they use to legitimize a concrete perspective of educational situations. These data have led us to three conclusions and a fundamental question. Firstly, the data from case analysis allow us to conclude that the invocation is a widespread resource in a process of dialogic reflection, and that the arguments and descriptions presented to the group, either by the tutor or by the students, have associated their own sources of validation. Secondly, the emergence of arguments based on personal experience by students reminds us of studies on the relevant sources used by teachers-in-training, who frequently dismiss academic knowledge as hardly applicable. They leave aside the knowledge learnt in the university and begin to use other conceptions that are distributed in the culture and practice of their workplace, and that they consider more useful (Clará & Mauri, 2010). Knowledge largely based on previous personal educational experiences and beliefs about educational intervention, whose arguments are not supported by research or disciplinary knowledge, are, in fact, present and developed in teachers-in-training. Those arguments appear with a high frequency in a conversation that involves participating, describing, arguing, understanding. 
In fact, as we said before, it is access to this experience that generates the voice of the students, legitimized both by the status they give to the evidence of their experience and by the tutor's acceptance of this voice as an active contribution to dialogic reflection (Candela, 1999; Cubero & Ignacio, 2011). Thirdly, the tutors' attitude of integrating and reformulating experience in the light of academic knowledge has not clearly managed to promote, in a generalized way, the incorporation of concepts from formalized knowledge by the students themselves. That is, the tutors' efforts to connect pedagogical principles with personal experiences in the Practicum have not translated into student reflections in the same direction. This last finding raises a question that deserves deeper study: an analysis of the extent to which the different sources of knowledge are treated as sources of understanding of a situation, or whether, instead, some sources prevail over others in the students' discourse. We believe that, for the socialization of students into teaching functions discussed in the introduction to this study, it is necessary to work with future teachers on reflecting about the legitimacy of their claims and arguments. The Practicum, by its very nature, plays a crucial role in such training. The tutors responsible for generating these reflection processes must be aware of the sources of validation that they themselves put into play in the classroom, and of those used by teachers-in-training to sustain their knowledge and beliefs. If these elements are decisive in reflection processes for teacher training, it seems essential to pay attention to the competencies and training that Practicum tutors need. Supports This work is supported by a research project subsidized in 2013 by the Knowledge Generation Subprogram of the Ministry of Economy and Competitiveness of Spain (http://www.mineco.gob.es/), R&D "Excellence" Project entitled "Aid for the construction of knowledge in the Practicum of teachers: Joint reflection to improve the theory-practice relationship" (EDU2013-44632-P).
2020-10-28T18:54:36.098Z
2020-10-07T00:00:00.000
{ "year": 2020, "sha1": "268baec3cd7b21d25e09afc7427184f131c78305", "oa_license": "CCBY", "oa_url": "http://dpj.pitt.edu/ojs/dpj1/article/download/311/221", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4f6c0ceedd2d63650c46118fdf1d3667cb32d804", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Sociology" ] }
89140814
pes2o/s2orc
v3-fos-license
The first data on bat ectoparasites (Acarina, Insecta) in the Baikal region and Yakutia (eastern Siberia): This paper summarizes new data on ectoparasites from bats (Chiroptera: Vespertilionidae) from eastern Siberia (Russia). The existence of 14 bat ectoparasite species is confirmed for this territory, including eight species of gamasid mites (Gamasina: Spinturnicidae, Macronyssidae) and six species of insects belonging to two orders (Insecta: Diptera, Siphonaptera). The bed-bugs (Insecta: Heteroptera: Cimicidae) found remain undetermined. These findings include six species (one species of gamasid mite and five species of insects) not previously reported from eastern Siberia. New hosts are described for some ectoparasites. Currently, the biology of Eastern Palaearctic boreal bats is the subject of intensive investigation. Several cryptic bat species (Chiroptera: Vespertilionidae) have been described in the Siberian and Far East territories using modern molecular genetic methods: Myotis petax Hollister, 1912, Myotis sibiricus (Kastschenko, 1905), Plecotus ognevi (Kishida, 1927) and others (Benda and Tsytsulina 2000; Matveev et al. 2005; Spitzenberger et al. 2006). The ecology of these species - including their host-parasite relationships - still needs to be investigated. Current data on Siberian arthropods parasitizing bats are both extremely limited and in need of revision because the taxonomic status of their hosts has changed. In fact, we still have no information about bat ectoparasites over an approximately 2 million km² area (see Figure 1). Bats were collected in the boreal zone of the Baikal region and Yakutia (Sakha Republic) (Figure 1) during the summers of 2014 and 2015. Captures were conducted during twilight and night-time hours using mist nets (Kunz and Kurta 1988) and Borisenko mobile traps (Borisenko 1999) in forest plots and open spaces (Figure 2). After examination, all animals were released or returned to their nursery roosts. A total of 52 individual bats belonging to five species of the family Vespertilionidae (Eastern Water Bat, Myotis petax; Siberian Bat, Myotis sibiricus; Ikonnikov's Bat, Myotis ikonnikovi (Ognev, 1912); Northern Bat, Eptesicus nilssonii (Keyserling & Blasius, 1839); and Ognev's Long-eared Bat, Plecotus ognevi) were examined. Ectoparasites were removed with a preparatory needle and forceps and fixed in 70% ethanol. Fleas and mites were mounted on permanent slides with Faure-Berlese's mounting medium (flea specimens were previously dipped into 5% KOH and washed in distilled water); bat flies were stored in 70% alcohol (Whitaker 1988). Ectoparasite species were identified using light microscopy according to several identification keys and articles (Medvedev 1985; Stanyukovich 1997; Lehr 1999). Figure 1. Investigated territory. The map was taken from Wikimedia Commons (2015). Bats were caught in the eastern Siberian territory in the following localities: 1. Baikal region (sites 1-3; Figure 1): a. Buryatia Republic, Kaban district, the foothills of the In total, our material represents 637 specimens of 12 mite and insect species. Of these, 7 species of gamasid mites belong to the families Spinturnicidae (3) and Macronyssidae (4), and 5 species of insects belong to the orders Siphonaptera (fam. Ischnopsyllidae) (2) and Diptera (fam. Nycteribiidae) (3). An annotated species list of eastern Siberian bat ectoparasites is presented below.
This extremely poorly studied species is included in the Siberian-Far East faunal complex. Only two earlier findings are known: in the Primorie Territory (Ussurian Reserve) from an unidentified host (Stanyukovich 1995) and in the Tuva Republic from the Eastern Water Bat (Orlova et al. 2015b). The species has been confirmed on M. ikonnikovi and M. sibiricus for the first time. It is likely that S. bregetovae is associated with bats of the genus Myotis Kaup, 1829. Taxonomic remark: Females of S. myoti are characterized by 90 dorsal opisthosomal setae and a pear-shaped sternal shield (Rudnick 1960; Stanyukovich 1997). Males of S. myoti have a sternogenital shield with four setal pairs and a small rounded tritosternum. Spinturnix plecotinus (Koch, 1839). Material: Baikal region: 2 ♂ from P. ognevi. This is an oligoxenous species with a trans-Palaearctic range. Bats of the genus Plecotus Grey, 1821 are the principal hosts, but it can also be found on other vespertilionid bats (Rudnick 1960; Stanyukovich 1990; Stanyukovich 1997). Taxonomic remark: Females have 14 or fewer setae at the end of the opisthosoma and lanceolate setae on the dorsal tips of tarsi II-IV. Males have two setae at the end of the opisthosoma; lanceolate setae on the dorsal tips of tarsi II-IV are present. Taxonomic remark: Species belonging to the genus Macronyssus differ from each other in the chaetotaxy of the dorsal shield and the pattern of the sternal glands (anterolateral sculpturing). The sternal glands of M. charusnurensis bear striae and cross-pieces in the oval zone (Figures 3E and F). Males have 10 thick and long setal pairs on the opisthosoma with a hollow. Macronyssus crosbyi (Ewing & Stover, 1915). Material: Baikal region: 3 ♀ (all with internal eggs) from P. ognevi; Yakutia: 5 ♂, 1 ♀ with internal egg, 13 N1 from M. sibiricus. Taxonomic remark: Females have a sternal plate with distinct anterolateral sculpturing consisting of 3 or 4 cells and indistinct striae. Males have thick, claw-like setae Z5 on the dorsal shield, and the opisthosomal setae on the unarmed integument are very long (100-120 μm). Macronyssus hosonoi (Uchikawa, 1979). Material: Baikal region: 1 ♀ from M. petax. This is a rare and insufficiently studied Siberian-Far East species (Uchikawa 1979; Medvedev et al. 1991; Orlova et al. 2015b). Findings are scarce. Most likely, this is an oligoxenous species parasitizing bats of the genus Myotis. Taxonomic remark: Females of M. hosonoi are characterized by the unique form of the dorsal shield, which lacks the narrowing of the posterior margin inherent to all other Macronyssus species. The sternal shield of females is crescent-shaped, without anterolateral sculpturing. Taxonomic remark: In males, sternite V has a row of 8-10 short spines. Claspers are straight, basally curved; the aedeagus is long and narrow, strongly curved at the top and sharpened (Figure 3H). Sternite VI in females is divided into two sclerites. The genital plate has an irregular form, with 6 or 7 setae. There is a large anal sclerite in females (Lehr 1999). Taxonomic remark: B. rybini belongs to the nattereri group, but clearly differs from other species in the form of the female genital plate, which is horseshoe-shaped, and in the form of the male genitalia (the parameres are bifurcate, with the upper lobe larger and bearing three setae) (Hurka 1969). Taxonomic remark: Both males and females of P.
monoceros have a notable hornlike projection on their heads. The ventral genital plate of females is narrow and curved; the dorsal genital plate is triangular, with 5 or 6 setae. In males, sternite V has two large lateral projections; the parameres are clearly longer than the aedeagus (Lehr 1999). Taxonomic remark: M. trisellis has false combs of thickened bristles in the dorsal area of abdominal terga I-III (Fig. 3G). Sternite VIII of males is without bundles of long hair-like bristles on the inner side (Hopkins and Rothschild 1956). Ischnopsyllus (Ischnopsyllus) obscurus (Wagner, 1898). The only published records (Zhovtiy et al. 1962) are from the Baikal region (near the village of Kaylastuya, Borzinsky district, Chita region). These were recorded in September 1958 from the Asian Particoloured Bat, Vespertilio sinensis (Peters, 1880). This is a widely distributed trans-Palaearctic boreal species (Medvedev 1996). According to Medvedev (1989) and Rupp et al. (2004), the principal host of I. obscurus is the Particoloured Bat. Taxonomic remark: I. obscurus has eight combs on the thorax and abdomen. The comb of the metanotum is composed of more than 40 spines. Bristles of sternum VIII are not conspicuously long (Hopkins and Rothschild 1956). Taxonomic remark: I. hexactenus has six combs on the thorax and abdomen. Males have large bristles at the apex of the movable process of the clasper; the apex of sternite VIII broadens gradually (Hopkins and Rothschild 1956). Few data exist in the literature on bat ectoparasites of the Baikal region, and no data exist on bat ectoparasites of Yakutia. A short communication by Zhovtiy et al. (1962) is devoted to the discovery of gamasid mites, bat flies and bat fleas in the Transbaikalian forest-steppe. However, the taxonomic status of some bat hosts and their ectoparasites has changed recently. In particular, it has been determined that the Whiskered Bat, Myotis mystacinus (Kuhl, 1817), does not inhabit the study area, and the species to which the bats formerly classified as M. mystacinus by Zhovtiy et al. (1962) belong is unknown. Moreover, the classifications of the gamasid mite species Spinturnix vespertilionis and Ichoronyssus flavus mentioned in that article are no longer valid at present, and the species status of representatives of the genus Cimex in the eastern Palaearctic requires clarification. Actually, according to Zhovtiy et al. (1962), only two species of parasitic arthropods associated with bats have been unambiguously confirmed in the Baikal region: the gamasid mite Steatonyssus superans (Zemskaja, 1951) and the flea Ischnopsyllus obscurus (Wagner, 1898) (Table 1). The gamasid mite Ornithonyssus pipistrelli, the bat flies Nycteribia quasiocellata, Basilia rybini, and Penicillidia monoceros, and the flea Myodopsylla trisellis were confirmed in eastern Siberia for the first time. These ectoparasite species belong to the Siberian-Far East and trans-Palaearctic (Holarctic) fauna complexes. It is difficult to discuss prior findings of Spinturnix bregetovae, S. myoti, S. plecotinus, Macronyssus charusnurensis, M. crosbyi, and M. hosonoi because it is impossible to determine which species were collected. These were identified by the authors as Spinturnix vespertilionis and Ichoronyssus flavus, which are currently invalid species.
In general, the ectoparasite species composition of the studied area is largely similar to that of western Siberia and the Far East (Table 1), which confirms the conservatism of the eastern Palaearctic boreal bat ectoparasite fauna throughout the huge territory from the Yenisey River to the Pacific coast. ACKNOWLEDGEMENTS We are grateful to V.V. Chepinoga and E.V. Sofronova for their help in the fieldwork and to M. Yu. Romanov for the picture processing. This study was supported through project number 6.657.2014/k, "Biotic ecosystem component, properties, resource potential and dynamics in a transforming environment of Western Siberia", Russian Program for Competitiveness Enhancement of Leading Russian Universities among Global Research and Education Centers (Project 5-100), and partially through project number 0376-2014-0001, subject 51.1.4, "The animal population of the arctic and continental Yakutia: the diversity of species, populations and communities (for example, lower reaches and deltas of the Lena river, tundra of the Yana-Indigirka-Kolyma interfluve, basin of the Middle Lena and the Aldan rivers)." Figure 2. Habitat for bats in Eastern Siberia. A) Mountain forests. B) River banks. C) Wood lakes. D) Remains of buildings. Photos by D.V. Kazakov (A-C) and E.S. Zakharov (D). Table 1. The findings of specific ectoparasites of bats from Eastern Siberia. (Table columns: species; Baikal region; Yakutia; published records from the Transbaikalian forest-steppe zone (Zhovtiy et al. 1962), western Siberia (Orlova et al. 2015a, 2015b) and the Far East (Medvedev et al. 1991); the table body, grouped from Arachnida: Acarina: Gamasina: Spinturnicidae onwards, is not reproduced here.) * This may be the first record of this species in the studied territory. ? Accurate interpretation of the literature data is impossible because of the changed taxonomic status of the parasite.
2019-04-01T13:14:10.486Z
2016-08-04T00:00:00.000
{ "year": 2016, "sha1": "c2825394c06d32d295cbdb0a6036ca45d72f12ff", "oa_license": "CCBY", "oa_url": "https://checklist.pensoft.net/article/19545/download/pdf/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c2825394c06d32d295cbdb0a6036ca45d72f12ff", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
16953194
pes2o/s2orc
v3-fos-license
Transient Abnormal Myelopoiesis and AML in Down Syndrome: an Update Children with constitutional trisomy 21 (Down syndrome (DS)) have a unique predisposition to develop myeloid leukaemia of Down syndrome (ML-DS). This disorder is preceded by a transient neonatal preleukaemic syndrome, transient abnormal myelopoiesis (TAM). TAM and ML-DS are caused by co-operation between trisomy 21, which itself perturbs fetal haematopoiesis, and acquired mutations in the key haematopoietic transcription factor gene GATA1. These mutations are found in almost one third of DS neonates and are frequently clinically and haematologically 'silent'. While the majority of cases of TAM undergo spontaneous remission, ∼10 % will progress to ML-DS by acquiring transforming mutations in additional oncogenes. Recent advances in the unique biological, cytogenetic and molecular characteristics of TAM and ML-DS are reviewed here. Introduction Population studies show that children with Down syndrome due to constitutional trisomy 21 have a markedly increased risk of developing acute leukaemia compared with children without Down syndrome [1]. Both myeloid leukaemia, known as myeloid leukaemia of Down syndrome (ML-DS), and acute lymphoblastic leukaemia are increased by 150- and ∼30-fold, respectively [1,2]. ML-DS has a distinct natural history and clinical and biological features (reviewed in [3,4]). It virtually always develops before the age of 5 years, and the acute leukaemia is preceded by a clonal neonatal preleukaemic syndrome known as transient abnormal myelopoiesis (TAM) that is unique to Down syndrome [3,4]. TAM is characterised by increased circulating blast cells that harbour acquired N-terminal truncating mutations in the key haematopoietic transcription factor gene GATA1 [5][6][7][8][9][10]. Around 10-15 % of neonates with Down syndrome have a diagnosis of TAM with blasts >10 % and typical clinical features that require close monitoring in the neonatal period, since the mortality rate may be up to 20 %. A further 10-15 % of neonates with Down syndrome have one or more acquired GATA1 mutations in association with a low number of circulating blast cells (<10 %) and have clinically and haematologically silent disease (silent TAM) [11••]. In the majority of cases of TAM and silent TAM, the GATA1 mutant clone goes into complete and permanent remission without the need for chemotherapy. However, 10-20 % of neonates with TAM and silent TAM subsequently develop ML-DS in the first 5 years of life when persistent GATA1 mutant cells acquire additional oncogenic mutations, most often in cohesin or epigenetic regulator genes [12••, 13]. This review article discusses the recent clinical and biological advances in TAM and ML-DS and how these may impact on clinical management. Cellular and Molecular Pathogenesis of TAM and ML-DS The cellular and molecular events involved in the initiation and evolution of TAM and ML-DS can best be understood as a three-step model which requires the presence within a fetal liver-derived haematopoietic stem or progenitor cell of (i) trisomy 21, (ii) an acquired GATA1 mutation, and (iii) at least one additional oncogenic mutation (Fig. 1). (i) Perturbation of fetal haematopoiesis by trisomy 21 The initial event in trisomy 21-associated preleukaemic and leukaemic conditions is the perturbation of fetal haematopoiesis by trisomy 21 itself.
It is known that by late in the first trimester of fetal life, haematopoiesis in the liver is abnormal in fetuses with trisomy 21 and that these changes precede the acquisition of GATA1 mutations [14, 15••]. Specifically, trisomy 21 causes an increase in the numbers of megakaryocyte-erythroid progenitors (MEP) and an increase in the size, and changes in the characteristics, of the immunophenotypic haematopoietic stem cell (HSC) compartment [15••]. HSC and multipotent myeloid progenitors in trisomy 21 fetal liver proliferate more in vitro than those from normal fetal liver at the same stage of development and have increased erythroid-megakaryocyte output and gene expression [15••]. Despite the increase in megakaryocytes (MK) in trisomy 21 fetal liver, MK differentiation is impaired and platelet counts are reduced both in fetal blood and in neonates with Down syndrome, suggesting that trisomy 21 itself causes dysmegakaryopoiesis [11••, 14, 15••]. The molecular basis for these dramatic changes in fetal erythro-megakaryopoiesis is not yet clear. Some, but not all, of the features can be recapitulated either in elegant animal models [16][17][18][19][20] or in studies in human embryonic stem cells (ESC) and induced pluripotent stem cells (iPSC) [21,22]. Together, these studies have implicated increased expression of various genes on chromosome 21, in particular ERG and DYRK1a, as important mediators of the abnormal megakaryopoiesis, although this does not seem to be sufficient to cause leukaemia in trisomic or disomic mouse models even when co-expressed with an N-terminally truncated GATA1 gene. Interestingly, recent data using a panel of iPSC lines suggest that trisomy of RUNX1, ETS2, and ERG might be sufficient, in combination with mutant GATA1, to explain many of the haematopoietic abnormalities seen in primary human fetal liver and TAM cells. Despite these interesting findings, it is now clear that trisomy 21 causes genome-wide changes in gene expression directly or indirectly affecting multiple genes on most chromosomes [23]. (ii) N-terminal truncating GATA1 mutations in TAM and ML-DS The link between acquired mutations in the GATA1 gene and ML-DS was first identified more than 12 years ago in John Crispino's lab [5] and was rapidly followed by studies from a number of groups confirming the link with trisomy 21 as well as showing the same N-terminal truncating mutations in TAM [6][7][8][9][10]. The GATA1 mutations that can be detected in all cases disappear when TAM (or ML-DS) enters remission, indicating that these are acquired events [9, 12••]. Application of highly sensitive next-generation sequencing (NGS)-based methodology has recently shown that GATA1 mutations are present in all cases of TAM or ML-DS and that they are present in 25-30 % of all neonates with Down syndrome [11••]. This means that GATA1 mutations are necessary for the development of TAM/ML-DS, that they are acquired prior to birth in fetal cells and that they occur at an astonishingly high frequency. It seems likely that acquisition of such mutations confers a selective growth advantage on these cells during fetal life. The presence of multiple GATA1 mutant clones in up to 25 % of neonates with Down syndrome is consistent with this [9, 11••]; however, the reason for their high frequency in Down syndrome remains unknown (these mutations are not found in normal, disomic cord blood) [9, 11••].
The vast majority of acquired GATA1 mutations (∼97 %) are found in exon 2 and the remainder in exon 3 of the GATA1 gene, including insertions, deletions and point mutations [10]. These mutations lead to expression of a truncated GATA1s protein [5,6], and the type of GATA1 mutation does not predict which patients with TAM will later progress to ML-DS [10]. Since the GATA1 gene is on the X chromosome, haematopoietic cells harbouring GATA1 mutations express only GATA1s and no longer have the ability to produce the full-length GATA1 protein [5]. The main physiological role of GATA1 is as a regulator of normal megakaryocyte and erythroid differentiation [24]. How GATA1s transforms trisomy 21 fetal haematopoietic cells is unclear. Forced expression of GATA1s in fetal liver haematopoietic progenitors from GATA1 wild-type mice causes marked expansion of megakaryoblastic progenitors, supporting a gain-of-function mechanism [25,26]. Interestingly, Banno et al., using a trisomy 21 iPSC model, also found that an increased level of expression of GATA1s might be responsible for the aberrant megakaryopoiesis they observed [27]. However, whether this is the main mechanism in trisomy 21 human cells and how GATA1s might transform trisomy 21 fetal haematopoietic cells remains an interesting question. (iii) Mutational landscape of ML-DS The presence of an N-terminal truncating mutation in GATA1 is necessary, but insufficient, for development of ML-DS. Recent whole genome and whole exome sequencing studies of ML-DS provide insight into the additional genetic events which co-operate with GATA1 mutations and trisomy 21 to further transform haematopoietic cells from a usually transient preleukaemic syndrome (TAM) to an acute leukaemia (ML-DS), which is inexorably fatal unless eradicated with chemotherapy [12••, 13]. These show a high frequency of mutations (∼50 %) in all the key cohesin component genes (RAD21, STAG2, SMC3 and SMC1A), as well as in CTCF (∼20 %) and in epigenetic regulators such as EZH2 and KANSL1 (45 %) [12••]. These genes encode proteins important for transcription regulation and long-range interactions that may be particularly vulnerable to disruption in trisomic cells. A smaller proportion of patients had mutations in RAS pathway genes (NRAS, KRAS, CBL, PTPN11 and NF1) [12••, 13] that are also seen at high frequency in other childhood leukaemias, such as juvenile myelomonocytic leukaemia [28]. (iv) Role of the haematopoietic microenvironment Although the data from primary human tissues as well as ESC and iPSC indicate that trisomy 21 causes cell-intrinsic changes in fetal haematopoietic stem and progenitor cells, the fetal liver haematopoietic microenvironment may also contribute both to these changes and to expansion and/or maintenance of the mutant GATA1 clone in TAM. Indeed, the natural history and clinical features of TAM clearly show this to be a fetal liver disease (see below). The nature of the growth factors that might mediate abnormal fetal haematopoiesis in TAM is unclear, although differences in the expression of, or responsiveness to, the developmentally regulated IGF signalling pathway remain an attractive candidate [29].
TAM: Clinical Features TAM has a very variable clinical presentation: at one end of the spectrum, it may be detected as an incidental finding on review of a blood film in an otherwise well baby (10-25 % of neonates), and at the other end of the spectrum, neonates with TAM may be very sick with disseminated leukaemic infiltration (10-20 % of neonates), presenting with massive hepatosplenomegaly, effusions, coagulopathy and multiorgan failure [30-32, 33••]. The majority of neonates with clinical TAM (i.e. blasts >10 %) will have one or more of the well-recognised clinical features of TAM, which are summarised in Table 1. Amongst these features, hepatomegaly, splenomegaly, pericardial/pleural effusions and skin rash are seen more frequently in neonates with TAM compared with neonates without any GATA1 mutations. Jaundice, on the other hand, is common in neonates with Down syndrome with or without TAM [11••, 30-32]. Importantly, however, as no single clinical feature is specific for TAM, it is essential to review the blood film of all neonates with Down syndrome to avoid missing cases of TAM and to assess the significance of the clinical features shown in Table 1 [11••]. This is also important in the setting of delayed-onset or prolonged hyperbilirubinaemia in neonates with Down syndrome, as this may be the presenting feature of progressive TAM-associated liver fibrosis that may be fatal. Although the majority of cases of TAM present within the first few days of life, TAM may also present in fetal life, either with hydrops fetalis or with features similar to those presenting postnatally [34•]. TAM: Laboratory Features TAM causes several haematological abnormalities. Characteristically, the main features are leucocytosis and increased peripheral blood blasts. Leucocytosis is present in 30-50 % of cases of TAM and typically includes increased neutrophils, myelocytes, monocytes and basophils [11••, 31, 33••]. The platelet count may be elevated, normal or reduced, and thrombocytopenia is not more common in neonates with TAM than in DS neonates without TAM [11••]. Similarly, although the median haemoglobin is lower in neonates with TAM compared with neonates with Down syndrome without TAM, anaemia is uncommon [11••, 31, 33••]. A deranged coagulation profile is reported to occur in 20-25 % of cases, although disseminated intravascular coagulopathy (DIC) is usually confined to cases where there is severe liver dysfunction due to hepatic infiltration by blast cells [30-32, 33••]. Hepatic dysfunction is manifested by severe conjugated hyperbilirubinaemia and is often, but not always, accompanied by elevated transaminases [30-32, 33••, 39]. The only one of these laboratory features that is specific for a diagnosis of TAM is a high number of circulating blast cells. However, one of the most challenging aspects of the diagnosis of TAM has been establishing whether or not there is a threshold value for the percentage of blasts that is reliable for diagnosis in the absence of molecular confirmation by GATA1 mutation analysis. Blast count, morphology and immunophenotype Although TAM is characterised by increased peripheral blood blasts, it is now known that blast cells are seen on the blood film of almost all neonates (∼98 %) with Down syndrome and may account for 15-20 % of the circulating leucocytes in neonates shown to have no GATA1 mutations [11••]. There is no internationally agreed definition of a percentage blast threshold that constitutes 'increased peripheral blood blast cells'.
In the Oxford Imperial Down Syndrome Cohort (OIDSC) Study, we addressed this question by prospectively classifying cases with blasts of >10 % and a GATA1 mutation in the first 14 days of life as TAM. In the preliminary analysis of the first 200 neonates with Down syndrome recruited into the study, 17 (8.5 %) fulfilled these criteria for a diagnosis, and these criteria identified all neonates with clinical features of TAM, including all with severe disease [11••]. This analysis also showed that ∼25 % of neonates with blasts >10 % do not have a GATA1 mutation even when very sensitive NGS-based methods are used. On the other hand, 18/70 neonates (26 %) in the OIDSC study with blasts ≤10 % (range 1-10 %) had a GATA1 mutation when NGS-based methods were used; these cases had no clinical and haematological features suggestive of TAM and were designated 'silent TAM'. Taken together, these data indicate that an accurate diagnosis of TAM relies on both the presence of blasts and a GATA1 mutation and suggest that a blast threshold of >10 % will identify all neonates with Down syndrome with TAM who may require chemotherapy and close monitoring during the neonatal period. However, this blast threshold is not specific for TAM and is not sufficiently sensitive to identify the majority of neonates who have GATA1 mutations. Typically, the blast cells in TAM are described as megakaryoblastic with cytoplasmic blebbing and basophilic cytoplasm; however, in our experience the morphology of the blasts is highly variable. Similarly, the immunophenotype of the blast cells is highly variable; the characteristic pattern of co-expression of stem cell markers (CD34 and CD117), myeloid markers (CD33/CD13) and platelet glycoproteins (CD36, CD42, CD61) together with CD56 and CD7 is heterogeneous both within and between cases [35][36][37][38]. At present, there is no distinguishing morphological or immunophenotypic profile that can accurately discriminate TAM from cases where there are no GATA1 mutations [11••]. Silent TAM As mentioned above, at least half of all Down syndrome neonates with GATA1 mutations have a peripheral blood blast percentage of 1-10 % and have no clinical features associated with TAM (Table 1) [11••]. This is an important group of neonates because the presence of the GATA1 mutation means that they are at risk of subsequently developing ML-DS if the mutant GATA1 clone persists [11••]. It is likely that the reason for the lack of clinical features is the small size of the mutant GATA1 clone at birth, since the OIDSC study showed a strong correlation between the size of the mutant clone and the percentage of peripheral blood blasts [11••]. TAM and Silent TAM: GATA1 Mutation Analysis The recognition of silent TAM means that detection of GATA1 mutations for clinical diagnosis requires sensitive as well as specific and reliable methods. Currently available methodologies for GATA1 mutation analysis are direct Sanger sequencing (sensitivity 10-30 %), dHPLC (sensitivity 2-10 %) and various methods of NGS (sensitivity 0.3-2 %) [9, 10, 11••, 12••]. Each method has technical limitations, advantages and disadvantages, but only NGS-based methods are sufficiently sensitive for initial diagnosis, as neither direct Sanger sequencing nor dHPLC is able to reliably detect small GATA1 mutant clones (<10 %) that are of clinical significance [11••].
The value of monitoring mutant GATA1 clones after diagnosis is currently not clear; this would require an extremely sensitive method, and an important limitation is that neonates with TAM may have more than one GATA1 mutant clone and that ML-DS may develop from either or both of the major and minor GATA1 clones present at birth [11••, 12••]. Natural History of TAM and Progression to ML-DS Most neonates with TAM (>80 %) undergo spontaneous resolution of both clinical and laboratory abnormalities within 3 months after birth, with a 5-year overall survival of ∼80 % and event-free survival of ∼60 % [30-32, 33••]. Complete remission is often characterised first by normalisation of blood counts and disappearance of peripheral blasts, followed by resolution of clinical symptoms such as hepatomegaly [39]. The overall mortality is reported to be ∼20 %; however, only half of the deaths are directly attributable to TAM, usually due to hepatic failure secondary to fibrosis and blast cell infiltration [30-32, 33••]. Estimates of the risk of progression of TAM to ML-DS are mainly based on retrospective studies and suggest that 20-30 % of neonates with TAM will subsequently present with ML-DS [30-32, 33••]. Since it is now known that the frequency of GATA1 mutations at birth is much higher than previously realised (25-30 % of all neonates with Down syndrome), and since population-based estimates of the frequency of ML-DS indicate that ∼1.5 % of children with Down syndrome will develop this leukaemia before the age of 5 years [1], this suggests that the risk of progression is lower than these original estimates (around 5-10 %), given that silent TAM also has the potential to transform to ML-DS. In some cases, there is overt progression/evolution of TAM to ML-DS with persistent abnormal haematology and an indolent myelodysplastic syndrome; in other cases, there is a variable apparent remission before development of ML-DS [11••, 30, 31]. GATA1 mutations are detected in all cases of ML-DS [11••, 12••] and are therefore essential for progression to ML-DS. Factors which reliably predict transformation of TAM to ML-DS have not yet been identified. The type of GATA1 mutation does not seem to play a role [10]. Data on the size of the GATA1 mutant clone at birth as a predictor of later ML-DS are too preliminary at present. The only clinical factor shown in multivariate analysis to predict transformation of TAM to ML-DS is the presence of pleural effusion in the neonatal period [31].
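As a rough arithmetical check of the revised 5-10 % progression estimate above (our back-of-the-envelope reading of the quoted figures, not a calculation reported in the cited studies), the progression risk can be approximated as the population frequency of ML-DS divided by the frequency of GATA1-mutation-positive neonates:

```latex
% Illustrative estimate only, using the frequencies quoted in the text:
% ~1.5 % of children with DS develop ML-DS before the age of 5 years, and
% 25-30 % of DS neonates carry a GATA1 mutation (TAM plus silent TAM).
\[
P(\text{ML-DS} \mid GATA1\ \text{mutant})
  \approx \frac{P(\text{ML-DS})}{P(GATA1\ \text{mutant})}
  = \frac{0.015}{0.25\text{--}0.30}
  \approx 0.05\text{--}0.06 ,
\]
% i.e. roughly 5-6 %, at the lower end of the 5-10 % range quoted above.
```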
TAM: Management Most neonates with TAM undergo spontaneous resolution and do not need treatment. However, neonates with progressive life-threatening symptoms such as hydrops fetalis, extreme leucocytosis (WBC >100 × 10⁹/l), hepatopathy, DIC with bleeding, and renal and/or cardiac failure may benefit from chemotherapy, as the mortality rate may be up to 20 % [30-32, 33••, 39]. A summary of the outcome of treatment from these studies is shown in Table 2. As TAM blasts appear to be very sensitive to cytarabine [40] and early observational studies showed promising results with very low doses of cytarabine [41], currently used regimens are based on this approach. The Berlin-Frankfurt-Münster group recommended treatment with cytarabine (0.5-1.5 mg/kg for 3-12 days) for neonates with TAM and clinical impairment due to thrombocytopenia, signs of cholestasis or liver dysfunction, or a high white cell count (>50 × 10⁹/l) [31]. Out of 146 patients, 28 received treatment with cytarabine, many of whom had hepatic fibrosis and required intensive support. Survival in the treated and untreated groups was very similar (5-year overall survival 78 ± 8 vs. 85 ± 3 %, p = 0.44), suggesting that treatment might have been beneficial given that the treated neonates had much more severe disease [31]. The Children's Oncology Group identified 38 of 135 patients as having life-threatening symptoms, and 24 received cytarabine, given as a continuous infusion at a dose of 3.33 mg/kg/day for 7 days. The survival rate for the treatment group was disappointing (51 %), most likely reflecting both the severity of the disease and the high rate of haematological toxicity (96 % grade 3/4 myelosuppression), perhaps because of the higher dose and continuous infusion regimen [33••]. More recently, a preliminary report from Muramatsu et al. of a large study in neonates with TAM reported a significant improvement in 1-year survival when neonates with extreme leucocytosis (>100 × 10⁹/l) were treated with cytarabine [52]. However, there is no evidence at present that treatment with cytarabine has a significant impact on the likelihood of disease progression to ML-DS [31, 33••]. ML-DS: Clinical Features ML-DS is classified as a specific subtype of AML in the World Health Organisation (WHO) classification [42]. This leukaemia is unique to Down syndrome and has several distinct features. Firstly, ML-DS presents at a median age of 1-1.8 years and is rare after the age of 4 years [43,44]. Secondly, most cases of ML-DS have a clinical history consistent with preceding TAM in the neonatal period, and for those that have no such history, the most likely reason is the absence of appropriate diagnostic tests at birth [11••]. Consistent with this, GATA1 mutations are found on neonatal bloodspots from children with ML-DS even in the absence of an antecedent history of TAM [9]. ML-DS often shows an indolent presentation with myelodysplasia and progressive pancytopenia, in particular thrombocytopenia and leucopenia, with a low percentage of circulating blasts for many months before the development of overt ML-DS [43,45,46]. Since the circulating blast count is often low in ML-DS and the predominant haematological picture may be of slowly progressive pancytopenia, a bone marrow aspirate is usually essential for the diagnosis of ML-DS. However, this is often associated with a 'dry tap' secondary to marked bone marrow fibrosis, and a bone marrow trephine may be necessary to confirm ML-DS; it is not clear that the conventional bone marrow blast threshold used in acute myeloid leukaemia is of value in ML-DS, in view of the natural history of the condition and the difficulty in obtaining a representative sample. ML-DS: Laboratory Features Almost all patients with ML-DS have thrombocytopenia, and most also have anaemia and neutropenia. In contrast to TAM, the leucocyte count is usually low. However, the blast cells are similar to those in TAM, with a typical megakaryoblastic morphology [43,47] and co-expression of stem/progenitor cell markers (CD34, CD117), myeloid (CD33), megakaryocytic (CD42b and CD41) and erythroid markers (CD36 and glycophorin A) as well as CD7 [35,37,47].
ML-DS has a distinct cytogenetic profile compared with sporadic AML in children without Down syndrome, in that neither the favourable cytogenetic changes such as AML1-ETO t(8;21), PML-RARA t(15;17), MLL t(9;11) and CBFB-MYH11 inv(16), nor the acute megakaryoblastic leukaemia-associated translocations RBM15-MKL1 t(1;22) and t(1;3), occur in ML-DS [48,49]. Instead, several karyotypic abnormalities are more frequent in ML-DS than in children without Down syndrome, including [53]. The results of a small retrospective study in 15 patients suggest that transplant-related toxicity might be reduced by using reduced-intensity conditioning regimens (n = 5; 80 % event-free survival) compared with standard myeloablative regimens (n = 10; 10 % event-free survival) [54]; however, these results remain to be confirmed in larger studies. Conclusion Children with Down syndrome have a markedly increased risk (∼150-fold) of developing acute myeloid leukaemia (known as ML-DS) compared with children without Down syndrome. ML-DS is preceded by a clonal neonatal preleukaemic disorder, known as TAM, which may be clinically overt or silent. TAM and ML-DS have unique biological, cytogenetic and molecular characteristics. There are at least three distinct steps in the pathogenesis of ML-DS. First, trisomy 21 perturbs fetal haematopoiesis, providing the ideal cellular context for the second step: transformation of these fetal haematopoietic cells by acquired N-terminal truncating mutations in the GATA1 gene to produce the clinical syndrome TAM. While the majority of cases of TAM resolve without sequelae as the GATA1 mutation is lost, ∼10 % of children harbour residual GATA1-mutant cells which then, in the third step, acquire transforming mutations in additional oncogenes, leading to ML-DS. Uncovering the mechanisms which underlie these events remains an exciting challenge and is at last beginning to offer real prospects of translating these findings into useful therapeutic advances for children with Down syndrome, so that we can improve treatment and outcome by investigating new agents that could potentially improve their leukaemia-free survival without additional toxicity. Compliance with Ethical Standards Conflict of Interests Neha Bhatnagar, Laure Nizery, Oliver Tunstall, Paresh Vyas and Irene Roberts each declare no potential conflicts of interest. Human and Animal Rights and Informed Consent This article does not contain any studies with human or animal subjects performed by any of the authors. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
2018-04-03T03:00:07.592Z
2016-08-10T00:00:00.000
{ "year": 2016, "sha1": "183089c5bf2a6aaa22d75eea60137b3a01ec1d01", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11899-016-0338-x.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "183089c5bf2a6aaa22d75eea60137b3a01ec1d01", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
149755206
pes2o/s2orc
v3-fos-license
Brief-ANT as a tool for measuring the influence of roadside advertisements' content on the drivers' attentional processes The question of roadside advertisements' influence on road safety is complex and multi-faceted. The list of advertisement characteristics which may play a significant role for road safety includes the size, colours, shape, luminance, contrast, localization, and many more. One of these aspects is the advertisement's content. The advertising industry uses emotional and cognitive mechanisms which are likely to engage the addressees' attention and therefore make the brand/product more salient in their minds. As far as such a strategy might be effective for advertisers, it may be dangerous for road safety when used in roadside advertisements. Cognitive, especially attentional, resources play a key role in vehicle driving, which requires constant maintenance of situational awareness. Attention distraction, both visual and cognitive, is a proven safety-decreasing factor in vehicle driving. A method for measuring the influence of different advertisement content on attentional resources management - a short version of the ANT, Brief-ANT - was developed. The results of the nationwide study conducted revealed that reaction time in Brief-ANT differed significantly depending on the type of content used as the fixation cue, which leads us to the conclusion that Brief-ANT might be a good measure of the content's influence on attentional resources management. Introduction The question of roadside advertisements' influence on road safety is complex and multi-faceted. The list of advertisement characteristics which may play a significant role for road safety includes the size, colours, shape, luminance, contrast, localization, and many more. One of these aspects is the advertisement's content. The advertisement's content itself is also a complex construct. A particular idea might be expressed both verbally and nonverbally (i.e., via the use of colours, shapes, and pictograms). Whichever technique is used, the aim of an advertisement is to catch the addressee's attention and, subsequently, engage him in processing its content in order to finally increase the product's or brand's salience in his mind [1]. To achieve this goal, advertisers use various mechanisms. Although the list below might be incomplete, it covers most of the mechanisms commonly observed in roadside advertising in Poland; moreover, many of them are considered dangerous when used in roadside advertising. -Slogans - text-based advertising, which might be either direct or incorporate more ambiguous or inexplicit literary figures (e.g., metaphors, ambiguity) [2]; -Sexual appeals [3]; -Negative emotions (e.g., fear, sadness, distaste) [4][5][6]; -Positive emotions (e.g., pleasure, drollery, tenderness) [7]; -Teasers - advertising in the form of a planned campaign consisting of two expositions: the first one containing some obscurity, and the second - suspended in time - dissolving the obscurity [8]; -Colour intensity and contrast use; -Picture complexity, i.e., the number of pictograms used in advertising [9]; -Unusual form. As mentioned above, advertising is often aimed at catching the addressee's attention, which in the context of roadside advertisement might mean the driver's attention. Meanwhile, cognitive, especially attentional, resources play a key role in vehicle driving, which requires constant maintenance of situational awareness. Attention distraction, both visual and cognitive, is a proven safety-decreasing factor in vehicle driving [10][11][12].
The content of roadside billboards can influence the driver's attention not only during the exposition (i.e., while looking at the advertisement) but also when his eyes are back on the road [13]. In such a situation, the driver is seemingly looking at the road, but because he is still processing the information seen before, his attentional resources are depleted [14]. Therefore, it seems useful to verify the effect of different types of roadside advertising content on driving safety in terms of attentional processing. Posner defines attention as a system which directs information flow and controls (i.e., prioritizes) information validity [13]. According to his theory, attentional functioning is based on three systems responsible for three functions: alerting, orienting, and executive control. The Attention Network Test (ANT) is a behavioural method designed to measure the activity of the three systems [15]. The task consists of a series of trials in which the participant's task is to indicate the direction of an arrow used as a so-called signal. The arrow might appear in the context of other arrows pointing in the same or a different direction than the signal (congruent vs. incongruent); it might also appear above or below the fixation point. In the test, reaction time serves as one of the main indicators of attention efficiency. As the purpose of the present project was not to measure the activity of the three systems but to verify the influence of different types of advertising content on the processing efficiency of attention, we decided to develop a simple laboratory method for verifying the effect. The method is based on one of the versions of the ANT task developed by the CRSD (Centre for Research on Safe Driving) [16]. The development of the method, called Brief-ANT, and all implemented modifications are described below. Participants and procedure A sample of 800 Polish drivers was involved in the study, 293 female and 507 male. After corrections for missing data, the sample consisted of 684 drivers: 253 female and 431 male. 15% (103) of the participants were between 18 and 24 years old, 166 were aged 25-35, 163 fell into the group between 35 and 44 years old, 124 were between 45 and 54, and 128 of the participants were over 54 years old. 66% of the respondents declared using a car daily, and 21% at least once a week (the rest of the group declared using a car less frequently). All types of habitats were represented in the sample: villages (42%), towns of up to 100,000 inhabitants (30%), towns of between 100,000 and 500,000 inhabitants (18%), and cities of over 500,000 inhabitants (10%). The study was a nationwide CAWI study in which the participants filled in an online survey consisting of demographic questions and the Brief-ANT. All participants were voluntary respondents recruited by the research panel and were rewarded for handing in the survey with credit points that they could subsequently exchange for rewards. The survey was secured against being completed twice by the same participant. Materials and methods Each participant started with a survey including demographic and driving-experience questions. Next, they completed the Brief-ANT. In Brief-ANT, the fixation cue is either an advertisement (450x300 px) or a cross (as in the original; cf. Fig. 1). Its presentation time was set constant at 2000 ms - the time assumed to be safety-critical for visual attention distraction in much transportation research. The cue then followed, and the re-fixation time was shortened to 100 ms. This was done to reduce the time between exposure to the advertisement and the reaction. Finally, the target was presented for 1500 ms. To sum up, the Brief-ANT trial consisted of the following sequence: fixation (2000 ms) -> cue (100 ms) -> fixation (100 ms) -> target (1500 ms) -> mid-trial interval (3000 ms).
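As an illustration of this trial structure, the following minimal, library-free Python sketch steps through the Brief-ANT phase sequence with the durations given above. It is not the software used in the study; the `show` callback is a hypothetical stand-in for the actual stimulus-presentation code.

```python
# A minimal sketch of the Brief-ANT trial timeline described above.
# Phase names and durations follow the paper; everything else is a placeholder.
import time

TRIAL_PHASES = [
    ("fixation", 2.000),   # advertisement or fixation cross, 2000 ms
    ("cue", 0.100),        # cue, 100 ms
    ("fixation", 0.100),   # shortened re-fixation, 100 ms
    ("target", 1.500),     # target arrows, 1500 ms response window
    ("interval", 3.000),   # mid-trial interval, 3000 ms
]

def run_trial(show):
    """Present one Brief-ANT trial by stepping through its timed phases."""
    for phase, duration in TRIAL_PHASES:
        show(phase)           # draw the stimulus for this phase
        time.sleep(duration)  # hold it for the prescribed duration

if __name__ == "__main__":
    run_trial(lambda phase: print(f"{time.monotonic():8.3f} s -> {phase}"))
```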
In order to test the method in an online version, a pilot study (N=80) was conducted. In the present study, different types of advertisement content referring to the mechanisms described in the Introduction were used. The stimuli consisted of four advertisements representing each mechanism: two with high intensity of the cue and two with low intensity of the cue. Consequently, we used: -4 sexual advertisements (2 strong - showing naked bodies - and 2 weak - showing gentle erotic cues); -4 emotionally negative advertisements (2 strong - e.g., showing blood and victims - and 2 weak - showing indirect cues about negative events); -4 emotionally positive advertisements (2 - using humour - and 2 - inducing general positive affect); -4 picture-based advertisements (2 - using multiple pictograms - and 2 - simple); -4 text-based advertisements (2 - using short slogans - and 2 - conveying the same message in more words); -4 colour-based advertisements (2 - with intense colours and high contrast - and 2 - with mild colours and low contrast); -4 teaser advertisements (2 - including an obscurity - and 2 - resolving it); -4 form-based advertisements (2 - in the typical form of a rectangular billboard - and 2 - with exactly the same content shown in an unusual form). Advertisements within each 'weak-strong' pair used the same stimuli (e.g., model, object) and varied only in the intensity of the cue. The choice of advertisements used in the study was based on the results of pilot studies (N=30). For each mechanism, ten pairs of advertisements varying with regard to the intensity of the key cue were prepared. The main study used pairs for which the pilot study revealed differences in the level of the key mechanism (i.e., sexuality, negative emotions, etc.), but not with regard to other characteristics (e.g., attractiveness). The advertisements were displayed in place of the fixation cross used in the original ANT version. The sequence of screens in the original version of ANT and in Brief-ANT is illustrated in Figure 2. Results Trials with no advertisement as the fixation cue were excluded from the following analyses, as their reaction times were in general shorter than those recorded after exposure to an advertisement, which is in line with observations from other ANT-based studies showing the influence of arousal on reaction times [17,18]. Reaction times in the context of advertisements with various content types were analysed with SPSS 23 software. The results of the GLM analysis revealed that reaction times differed in the context of various content types (F = 16.089, p < .001, η² = .023). Figure 3 illustrates the results of the analysis. Fig. 3. Reaction times in the context of various advertisement types. A series of contrast analyses was performed to reveal which reaction times differed significantly from one another. The results of the contrast analyses are presented in Table 1. In order to verify whether reaction times differed in the context of advertisements using the same mechanisms but of different intensity, a series of pairwise t-tests was performed. Results of the comparisons are illustrated in Figure 4.
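For readers who want to reproduce this kind of analysis outside SPSS, one of the pairwise weak-versus-strong comparisons might look as follows. This is a minimal sketch using randomly generated placeholder reaction times, not the study data; only the test itself mirrors the analysis reported above, and all variable names are ours.

```python
# A minimal sketch of one pairwise comparison (weak vs. strong variant of a
# mechanism) on per-participant mean reaction times. The data generated here
# are placeholders standing in for the real measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_participants = 684  # sample size after corrections for missing data

# Hypothetical mean RTs (ms) per participant for the two cue intensities.
rt_weak = rng.normal(loc=600.0, scale=60.0, size=n_participants)
rt_strong = rt_weak + rng.normal(loc=15.0, scale=40.0, size=n_participants)

# Paired t-test, since both conditions come from the same participants.
t_stat, p_value = stats.ttest_rel(rt_strong, rt_weak)
print(f"t({n_participants - 1}) = {t_stat:.2f}, p = {p_value:.4g}")
```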
Conclusion The results of the study reveal that Brief-ANT differentiates the processing efficiency of attention in the context of various cues. Reaction times observed when advertisements based on different mechanisms were used as a cue differed significantly. The longest reaction time was observed after the exposition of text-based advertisements. Moreover, the exposition of longer slogans elicited longer reaction times than did shorter ones. The next longest reaction times were observed after the exposition of emotion-inducing advertisements: negative and positive, successively. The difference between the reaction times induced by those two mechanisms was nonsignificant, while the reaction times observed after their exposition were significantly longer than after the exposition of any other type of cue (textual advertisements excluded) in the case of emotionally negative cues (the differences did not reach the level of statistical significance in the case of emotionally positive stimuli). What is more, the intensity of the emotional cue (strongly negative vs. mildly negative, humorous vs. generally positive) changed the reaction times observed, i.e., they were longer in the case of strongly negative and humorous cues than in the other contexts. Although negative emotional stimuli caused reaction times which differed more significantly from those caused by other mechanisms than did positive emotional stimuli, they both seem to have a similar effect. This might mean that it is the arousal that influences the processing effectiveness of attention, not its valence. The fourth longest reaction time was observed in the context of sexually appealing advertisements. Also, reaction times after the exposition of more intense erotic cues were longer than when mildly erotic cues were presented. Such a pattern of results seems consistent with the thesis expressed in the previous paragraph, thus forming a group of three arousing mechanisms which may severely influence attentional functioning. The above considerations seem consistent with observations from other studies, published elsewhere, conducted with the use of a driving simulator. We observed that longer slogans resulted in increased variation in maintaining the distance to the vehicle in front in the Three Vehicle Platoon Task [19]. Similar effects were observed during the exposition of advertisements based on negative emotions - strongly negative stimuli in roadside advertisements influenced distance maintenance more than did weak negative stimuli [20]. Moreover, longer advertising slogans used in roadside advertising resulted in longer reaction times in the Lane Changing Task than did shorter ones [21], i.e., when exposed to billboards with longer slogans, it took drivers a longer time to change driving lane in reaction to a traffic sign. Reaction times after the exposition of visually more complex stimuli were also longer than those following simple stimuli. Such an effect finds confirmation in research on the influence of roadside advertising on driving effectiveness [9]. No differences were observed for two mechanisms: colour intensity and unusual form. As for the latter, the lack of an effect might be due to the computer-based method, which considerably decreases the visibility of form variations, reducing them all to a two-dimensional photo displayed on the screen.
The former mechanism possibly needs more in-depth investigation; however, the lack of differences might be a result of the fact that participants had no option but to observe the advertisement when it was displayed, while colour intensity possibly plays its role in catching attention rather than in influencing attentional processing as such. Interestingly, the fact that reaction times in Brief-ANT vary depending on the type of cue seems to confirm the assumption that using attention-catching mechanisms in advertising influences attentional processes not only during the exposition but also afterwards. The results of the study lead us to the conclusion that Brief-ANT might be a good measure of the content's influence on attentional resources management and can be useful in research on the safety-critical characteristics of advertising content in the context of driving safety.
DEVELOPING E-AUTHENTICATION FOR E-ASSESSMENT – DIVERSITY OF STUDENTS TESTING THE SYSTEM IN HIGHER EDUCATION

Abstract

E-authentication is one of the key topics in the field of online education and e-assessment. This study was aimed at investigating the user experiences of students with special educational needs and disabilities (SEND) while developing an accessible e-authentication system for higher education institutions. Altogether, 15 students tested the system (including instruments for face recognition, voice recognition, keystroke dynamics, text style analysis and anti-plagiarism), developed as part of the TeSLA project. Students also completed pre-questionnaires and post-questionnaires and attended individual interviews. The findings reveal positive expectations and experiences of e-authentication. Students believed that the e-authentication system increased trust and, thus, diversified their possibilities for studying online. Students also identified some challenges and emphasized that an e-authentication system should be reliable and easy to use. The possibility to use different kinds of instruments was perceived as an important feature. Students' willingness to use these instruments and share their personal data for e-authentication varied due to their disabilities or individual preferences. The results suggest that students should have options for what kind of e-authentication they use.

Introduction

Integration of information and communication technologies (ICT) in academic studies is a reality at all higher education institutions (Heiman et al., 2017). Online programmes and courses have become a customary part of higher education practices. Some universities are fully online, and traditional universities are increasingly offering blended and online education courses. E-assessment systems follow the same development.

Online education provides new options for students with Special Educational Needs and Disabilities (SEND) to participate, and thus improves their access to higher education (Coleman & Berge, 2018). The European Disability Strategy 2010-2020 highlights the goal of promoting inclusive education and lifelong learning for people with disabilities (European Commission, 2010). It is known that the diversity of students, as well as the number of higher education students with SEND, is growing (Snyder et al., 2018). In Finland, the results of a national survey of higher education students revealed that 8.2% of students have a learning difficulty, illness or disability that affects their learning (Kunttu et al., 2017). When special educational needs are considered in a broader sense, the number is even higher. According to the Eurostudent survey, 28% of Finnish higher education students had a chronic illness, mental health problems, physical restriction or disability, sensory disability, learning difficulty (ADHD, dyslexia) or other long-term health problem (Potila et al., 2017), and 70% of those students reported that this hampered their studying.
It is also likely that the proportion of students with disabilities in higher education is underestimated and the number is larger than reported (Grimes et al., 2017). For various reasons, students do not always want to disclose disabilities, even if doing so would enable them to receive better support (Grimes et al., 2017; Kent et al., 2018). In Verdinelli and Kutner's (2016) study, students experienced discrimination due to their disability in the traditional education environment, but in the online environment, they did not endure stigmatization or stereotypical treatment. In addition, participants experienced a greater sense of control and academic efficacy when studying online (Verdinelli & Kutner, 2016). Thus, teachers instruct diverse students enrolled in higher education courses without knowing their special educational needs (Lombardi et al., 2015). In the context of online education, recognizing students' special educational needs can be even more difficult than in traditional campus education. Therefore, accessibility must be a self-evident part of online course design, not something to address after a student has disclosed a disability (Betts et al., 2013; Ladonlahti et al., 2020).

Accessibility means that "people with disabilities have access, on an equal basis with others, to the physical environment, transportation, information and communications technologies and systems (ICT) and other facilities and services" (European Commission, 2010, p. 5). Directive (EU) 2016/2102 is aimed at ensuring that all websites and mobile applications of public sector bodies are accessible, so that everyone can access and understand the meaning of the content (EUR-lex, 2019). Inclusive and accessible online learning requires an approach that addresses both technology and pedagogy (Kent et al., 2018). Access to education should always mean that, for example, course contents, learning activities and all services that students need are accessible (Betts et al., 2013). If online education is created in an accessible way, this will open up more possibilities for all students to study (see Macy et al., 2018).

The need for student authentication and authorship verification in online education

Although online education offers many benefits, like new ways of representing knowledge and increased flexibility of studying (Timmis et al., 2016), it also raises new issues to consider. Amigud (2013) reminds us that new technologies may facilitate cheating. In addition, in Mellar et al.'s (2018) study, teachers expected cheating to become a greater problem with the increased use of e-assessment. Many higher education instructors saw an effective e-authentication system as a prerequisite for the larger use of e-assessment (Mellar et al., 2018). Researchers and higher education staff have acknowledged the need for reliable ways to confirm students' identities. There is a need to develop systems for student authentication and authorship verification. At the same time, it is important to ensure that the systems are accessible for a diversity of students, including those using assistive technology (e.g., Amigud, 2013; Mellar et al., 2018).
Username and password identification are often used to control access to the online learning environment, but this is an inadequate approach to authentication (Amigud, 2013). Some systems also deploy biometric technologies (Amigud, 2013; Lee-Post & Hapke, 2017). Lee-Post and Hapke (2017) argue that biometric-based authentication solutions require the use of special devices. In addition, concerns have been raised about data security and privacy issues in dealing with sensitive user data (Lee-Post & Hapke, 2017). However, universities seem to enjoy the status of trustworthy operators. Levy et al. (2011) indicate that students taking online courses are more willing to share their biometric data with the university than with a private vendor offering the same service. Guillén-Gámez et al. (2015) recommend the gradual introduction of biometric authentication systems for students online. In their study, students who used biometric authentication were more favourable to, and comfortable with, it and appreciated the implementation of this technology compared with students who had not tested the software. In Okada, Noguera, et al.'s (2019) study, teaching staff believed that an e-authentication system would increase students' awareness of cheating and plagiarism. However, they also believed that it would not be possible to prevent fraud totally. In Okada, Whitelock, et al.'s (2019) study, e-authentication for online assessments was received rather positively by students. E-authentication should offer more possibilities for assessment, for example with the flexibility of time or place to study. Nevertheless, students with disabilities, on average, had various concerns and relatively negative attitudes towards e-authentication due to their lack of confidence and concerns about their limitations (Okada, Whitelock, et al., 2019). It appears that students with SEND do not find e-authentication completely suitable for them and that their needs are not sufficiently recognized. This underlines the importance of SEND students' role as partners when developing an e-authentication system for e-assessment.

During the project, five different authentication and authorship verification instruments were integrated into the system (Table 1): face recognition, voice recognition and keystroke dynamics for biometric authentication, and plagiarism detection and forensic analysis for authorship verification (Knuth, 2016). The forensic analysis instrument verifies that a document has been written by a specific author; this requires a set of text files written by that author.

All instruments, except plagiarism detection, required at least two samples from each student to allow for comparing the samples. The TeSLA system has been designed for use with commonly available devices like an ordinary laptop; therefore, students do not have to supply any special devices (see Mellar et al., 2018; TeSLA, 2019). The TeSLA system does not require any special software or hardware. It is integrated into the learning management system. Students need a microphone for voice recognition, a web camera for face recognition and a keyboard for keystroke dynamics. The system is also easy to use: the user does not need any specific training, and all the instructions are included in the system.
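The paper does not disclose the internal matching algorithms, so the following is only a generic sketch of how a keystroke-dynamics check of a new sample against enrolment samples might look. The timing features (hold and flight times) and the acceptance threshold are illustrative assumptions, not the TeSLA implementation.

```python
# Illustrative sketch of keystroke-dynamics verification: compare timing
# features of a new typing sample against the mean of enrolment samples.
# Feature choice and threshold are assumptions, not the TeSLA algorithm.
import numpy as np

def timing_features(key_events):
    """key_events: list of (press_time, release_time) per keystroke, in seconds."""
    presses = np.array([p for p, _ in key_events])
    releases = np.array([r for _, r in key_events])
    hold = releases - presses              # how long each key is held down
    flight = presses[1:] - releases[:-1]   # gap between consecutive keys
    return np.array([hold.mean(), hold.std(), flight.mean(), flight.std()])

def verify(enrolment_samples, new_sample, threshold=0.5):
    """Accept if the new sample's features lie close to the enrolment profile."""
    profile = np.mean([timing_features(s) for s in enrolment_samples], axis=0)
    distance = np.linalg.norm(timing_features(new_sample) - profile)
    return distance <= threshold
```

A real system would use richer per-digraph features and a learned decision rule, but the two-phase structure (enrolment profile, then comparison at assessment time) mirrors the workflow described above.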
Research questions and methods

This study was carried out to investigate the user experiences of students with SEND, who tested an e-authentication system (face recognition, voice recognition, keystroke dynamics and forensic analysis), and their perceptions of e-authentication in higher education studies. Plagiarism detection was not part of this study. The research questions were as follows:

• What kind of user experiences do students with SEND have of the e-authentication system?
• What are the benefits and challenges of the e-authentication system according to higher education students with SEND?

Participants

Participants of this study came from only one TeSLA pilot university. Altogether, 15 university students (eleven female, four male) participated in the study. They ranged in age from 20 to 48 years, with most of them having been born in the 1990s. They were undergraduate students at a Finnish university from the faculties of humanities and social sciences, education and psychology, and mathematics and science. All participants had special educational needs. The classification of students with special educational needs and disabilities for the research was built together with the other pilot universities. According to the information that participants shared, nine of them had reading and writing difficulties or some other specific learning difficulty, two were partially sighted, one had a chronic illness, one had problems with attention and concentration, and two were deaf and used sign language.

Students participated voluntarily. Each student received 10 vouchers for lunch at the student canteen after completing the testing and the interview. In addition, together with all other students who took part in the TeSLA study, they participated in a lottery to win one of two iPads.

Data collection

The university's accessibility planning coordinator sent an invitation email to all students with SEND at the university. Those students who decided to volunteer sent an email to the researchers and provided their contact information. After that, the researchers sent the volunteers more information (e.g., a video recording) about the TeSLA project and the study.
Students took part in structured, face-to-face tests of the system. The testing was conducted between December 2017 and May 2018 and occurred on the university premises. At the time of data collection, the TeSLA system was still in development. Each test scenario included questionnaires and a face-to-face interview at the end of the session. Before the testing and interview, all students provided informed consent, which presented data protection and privacy information about the TeSLA project and the present study. The digital consent form included detailed information about what kind of authentication and authorship verification instruments students would be asked to use, what kind of data would be gathered, where it would be stored, and how it would be used. Participants accepted the digital consent form by marking a cross. The researchers also confirmed participants' agreement orally and made sure that the test situation could be videotaped and the data used for research. The researchers emphasized that students were free to drop out of the study at any time. Students were also able to ask the researchers about the study and pose questions about anything that was unclear. Students tested face recognition, voice recognition, keystroke dynamics and forensic analysis, and the whole test took two to three hours per student. The time spent on these steps varied a lot, depending mainly on the participants' speed of reading, speed of typing, and personal characteristics of speech and voice.

All participants were asked to follow the same steps for the system test (Figure 1). Students logged in with their username and password and accessed the TeSLA Moodle environment. They accepted the consent form, which confirmed their participation in the TeSLA project and the present study. Subsequently, they completed the pre-questionnaire. Then students were asked to complete the enrolment activities to initialise (set a baseline for) the system. This involved typing 250 words, providing voice samples and videos of the face, and sharing text documents. Next, students performed assessment tasks that involved typing answers to some simple questions, reading answers aloud, keeping the camera on and sharing text documents (Table 2). At the end, students completed a post-questionnaire about their experience of the TeSLA system. After completing these steps, students were interviewed. This study was mainly based on the interview data. The aim of the interviews was to obtain feedback on the TeSLA system and instruments and to gain knowledge of the experiences and expectations of the e-authentication system. All participants were interviewed individually, but the number of researchers in the test situation varied. To build and ensure a common structure for the test situation, three researchers interviewed the first two students. The rest of the interviews were completed by two researchers, with two exceptions when only one researcher was available. The two students using sign language had an interpreter involved in their interviews. Interviews lasted from 7 to 22 minutes. Interviews were conducted in Finnish; thus, all quotations in the Findings section were translated from Finnish to English by the authors. Each student was assigned an individual number and abbreviation, such as S1. Three dots (…) denote that a quotation was condensed.
Furthermore, minor parts of the participants' (N = 15) pre-questionnaire and post-questionnaire data were used. One multiple-choice question from the pre-questionnaire and three multiple-choice questions from the post-questionnaire were included in this study to add some extra information about the themes that emerged from the interviews. Students were asked in both questionnaires which types of personal data they were willing to share in the e-authentication process. In the post-questionnaire, students were asked about the main advantages and disadvantages of e-assessment for students. Students also had the opportunity to comment freely on the advantages and disadvantages of e-assessment.

Analysis

Interviews were audio recorded and the recordings were transcribed verbatim. In total, 70 pages of transcribed text (12-point font size) were gathered from the interviews. The transcribed data were analysed qualitatively using content analysis (Patton, 2015, p. 541). The analysis process was inductive, and the unit of analysis was a unity of one thought or meaning. The length of this unity was usually a sentence or at least a few words. The qualitative data analysis software Atlas.ti was used at the beginning of the analysis to organize the data.

Both investigator triangulation and data triangulation were used (Patton, 2015, p. 316). Only one researcher conducted the analysis but discussed the categories closely with the other researchers; therefore, all authors were involved. The main data were derived from the interviews, whereas the questionnaire data were used to verify and identify possibly inconsistent contents, meaning that data triangulation was used to a lesser degree.

Experiences of the TeSLA e-authentication system

Willingness to share personal data

All participants had opportunities to test all instruments, including sharing voice, video of their face and written text, and typing. One student with a hearing impairment declined to do the voice recognition, but all other participants attempted to use all instruments. Generally, they felt quite comfortable with sharing their personal data for authentication and had positive perceptions of the different ways of sharing data.

In the pre-questionnaire (before testing) and post-questionnaire (after testing), students were asked which types of personal data they were willing to share in the e-authentication process. The question was not directed to any specific situation, but to the student's study context in general. When answering the post-questionnaire, all students had the experience of using the TeSLA e-authentication instruments. In Table 3, the plus sign (+) stands for a student's willingness to share information and the minus sign (−) for a student's unwillingness to share. The situations before and after testing are separated with a slash (before/after). Students were more willing to share text-based information than biometric data, including picture or audio data (Table 3).
The questionnaire data showed that there were no big differences in students' willingness to share information about themselves before and after the testing. Seven out of 15 participants did not change their attitudes at all. Overall, students were slightly more willing to share information about themselves after the testing than before it. Five out of 15 students developed more positive attitudes towards the instruments, with the audio recording of the voice demonstrating the most dramatic positive change. Finally, after testing, 5 out of 15 students were willing to share all personal data for which they were asked. Only the requests to share a still photograph of the face and an audio recording of the voice were met with less willingness by some students after testing (Table 3). Similar answers were given in the interviews and questionnaires. Some students experienced sharing biometric data as unpleasant. There were more doubts about sharing video of the face or voice than about typing. Students were unaccustomed to sharing video for identification; thus, doing so gave them an uncomfortable feeling. One student stated, "It [face recognition] was somehow annoying because it is something that I am not used to doing. In addition, I think it is maybe a bit intrusive. Usually, it is [identity] numbers and something like that, which you give online, but this is maybe too sensitive because there is the face picture" (S8). One student pondered that someone might have a camera phobia, in which case a video would not be pleasant. However, a student with a hearing impairment said that she was so attuned to using the camera with her friends that she had no problem with sharing a video of her face.

Moreover, upon seeing her face on the screen, one participant began to feel uncertain. Thus, showing one's face was experienced as uncomfortable, and reading aloud into a microphone was strange for some students. One student mentioned, "if you have to read or speak out loud or show your own face, so those felt quite strange (...)" (S12). Another student said, "Maybe giving the voice sample was a bit weird or felt the most unfamiliar. Otherwise, everything was okay" (S15). One student with a hearing impairment stated that she did not want to do the voice recognition. Her first language was sign language and she was not used to using her voice. The other student with a hearing impairment confirmed that deaf students seldom want to use their voice, and in this kind of recognition system, one should always have an option for this. However, he was willing to provide the voice recording: "(…) because I was born with hearing and later on, I lost it, so I am able to use my voice" (S2).

It was mainly the video of the face and the voice recording that aroused doubts, but for some students, typing was awkward as well. A student with learning difficulties mentioned that writing was difficult for her; therefore, speaking or using video for recognition was easier. In addition, a student who used sign language had doubts about writing: "It [typing] is challenging. (...)
I think the uncertainty is because of the Finnish language. At least for me, when I must write, I feel insecure. (…) If I could just use sign language, it would be more natural" (S1). However, participants described sharing typing samples and written text samples for identification in a rather neutral way.

Ease of use of the system

Many students expressed overall positivity towards the TeSLA system. Students acknowledged that it was interesting, and they perceived it as current and part of modern daily life. "It was quite fun. I haven't done anything like this before, so it was nice" (S4). "Well, it was new and special" (S6). In addition, some students thought that it was no big deal and stated that they had used similar kinds of systems before. Some students described the use of TeSLA as "smooth", "simple" or "easy".

There was also some hesitation towards the TeSLA system. Owing to the newness of such an authentication system to most participants, they needed time to get used to it. One interviewee, considering TeSLA from other students' perspective, pointed out, "I think some of the students will experience this as strange – having to share picture and voice and all" (S15). Students mentioned that at first, there might be some doubts and suspicions regarding the new technology. Some students described the screen layout as unclear and stated that it was time-consuming to find the right way to proceed. "But I don't – maybe you noticed that I didn't perceive right away where to get it and where I should go. I don't know if others can perceive it; is this general or is it just me?" (S11). Another student stated, "(…) Maybe it is because of my dyslexia that it was really difficult for me to piece together what is relevant here" (S4). An authentication system should be user-friendly and guide every user automatically.

Participants perceived the ability to use different kinds of authentication instruments as an important factor. They were pleased that students had various possibilities for carrying out authentication. "I think I have a good feeling about all of this. It [TeSLA] is versatile. It is not just one but there is lots of different information, like voice, video and typing. It was nice" (S3). The possibility to use different authentication instruments was important also because of students' disabilities. Participants emphasized that the user should be able to decide which instrument is appropriate for them. "(…) But if a person is deaf from birth, they necessarily won't want to use voice and, therefore, neither voice recognition, but if there are options for you to use just, for example, two instruments, then there is no problem. But if you must use all four instruments, then it is a barrier" (S2). Thus, a requirement to use all instruments was considered a problem.
Technical characteristics of the system

The technical reliability of the TeSLA system was mentioned as an essential element. It was seen as important that the technology itself be assured so that the student does not have to worry about using it. One student pointed out the following: "One must develop a guaranteed system where everything works" (S9). During the test situation, some technical difficulties occurred for some students, and these affected their experiences. Students wanted to be sure that when they have, for example, an exam, e-authentication does not complicate participation. Some instruments demanded several samples, which participants experienced as quite burdensome. Some main concerns were related to voice recognition, which required several voice samples recorded at the enrolment phase. "But the voice sample was hard; you had to work for a surprisingly long time for your voice (...)" (S12). To share voice samples, students could freely say whatever they wanted or could read a text aloud. In both cases, the TeSLA system informed the user when the number of samples was sufficient. One reason why this instrument required a lot of work was that the voice samples needed to be continuous, with no long pauses, whereas it is typical for Finnish speakers to pause while speaking. Furthermore, if one has difficulties in speaking or a slow pace of talking, voice recognition is challenging. The technical aspects of the TeSLA system also received praise and, overall, the system was perceived as simple to use. In addition, participants made more specific observations. For example, a student with a visual impairment pointed out that the keyboard shortcut buttons, which helped the user move around the screen, functioned well.

More alternative ways to study

The main advantage of e-authentication was the increased possibility of online education, which gives students more freedom to decide for themselves how and where to study. The interviewed students were mainly participating in face-to-face courses at the university, and they believed that e-authentication would enable more flexibility and new online education modes. The possibility of taking exams or listening to lectures at home was viewed very favourably. "It would make it possible to do that stuff at home; you are not so dependent on the particular place" (S4). A student with a visual impairment mentioned his preference for writing exams at home: "If you have a disability, you want to have more control over the situation" (S13). It was also seen as important when a student is in a special situation, such as being sick or on a trip or vacation. In addition, online education would make it possible to continue to study during an internship. All students agreed in the questionnaire that one of the benefits of e-assessment was being able to determine the time or place for taking an exam. However, some students emphasized that the option for face-to-face studying should remain. These students did not want to study strictly online. They wanted to meet their instructors one-on-one and liked studying in the university buildings. In the questionnaire, one student clarified that a disadvantage of e-assessment is that teacher-student contact might cease.
Increased trust in studying

Students stated that the e-authentication system might increase confidence in both students and teachers: "Perhaps it would create security on both sides" (S4). It was argued that it would foster teachers' trust in students; for example, the teacher can be sure that it is the right person doing the exam and not someone else.

Students' increased safety and trust were important benefits of e-authentication. Interviewees stated that it is to the students' benefit if they are better recognized, because this increases safety and individuality. For a student, it is also important to know that no outsider can exploit a student's exams or assignments: "It increases every student's right to their own writings and assessments, and it makes it possible to recognize the student better, so it is like a protection (…) and my work is mine – nobody can snatch or use it" (S9).

Students' own challenges

Students had issues concerning their own ICT skills. Some students were uncertain about whether they could manage the e-authentication system by themselves. One student mentioned that her ICT skills were poor. She thought she would be nervous if she had to perform e-authentication at home before she could start to study. A student with reading and writing problems stated that it would require too much effort for her to first concentrate on e-authentication and only after that start studying: "(…) It is hard for me to concentrate on the main thing if, all the time, I am aware that I am being videotaped or I must give voice samples or something like that. It will produce extra tension and studying itself might become more difficult" (S12). One student also assumed that because of her learning difficulties, she had difficulties with technical activities and often needed help.

When studying online, the possibility of saving money and time was highlighted by the students. On the one hand, being able to study at home would decrease the driving back and forth to the university, and this was an advantage. On the other hand, financial issues bothered some students. E-authentication requires certain kinds of technical equipment, and students questioned whether everyone would have adequate equipment. In the questionnaire, 12 students expressed that they would need some extra technological equipment and eight students stated that they would need time to learn a new technology.

Issues with security

Some interviewees pondered issues relating to security and data protection on the internet. Opinions were mixed. Students who did not have doubts about security and data protection said that they were used to sharing information about themselves. They had good trust in their security.

Students who had doubts about security did not focus so much on the system that they were testing as on the internet overall. One student was sceptical about privacy and data protection and was critical about these issues at the university. "The more I read about data protection, the more suspicious I am about how the data can be protected" (S6). Students were aware that the internet holds lots of information about them, and this knowledge did not always feel good. In the questionnaire, one student commented that a possible security breach or information misuse are disadvantages of e-assessment.
Discussion

A qualitative study design was used to investigate aspects of accessible e-authentication in higher education. The focus of the study was on the perceptions of students with SEND testing the TeSLA e-authentication system. Thus, the study was intended to shed light on students' experiences of testing the system as well as their opinions about the benefits and challenges of using e-authentication. The main results of the study are discussed as follows.

When describing their perceptions of e-authentication, students focused relatively little on their own disabilities or special educational needs and, instead, brought up more general issues. Aspects relating to the students themselves focused mainly on possibilities and skills to study. Students believed that the e-authentication system would increase their possibilities for pursuing online education, and they were pleased about this. They saw online education as a positive development, which allows more flexible and individualized ways to study. As in previous studies, online education was experienced as beneficial by students with SEND (Kent et al., 2018; Verdinelli & Kutner, 2016).

Besides technical reliability, accessibility issues must be managed when developing an authentication system, such as ensuring adequate colour contrast, a clear font and alternative text for images (Macy et al., 2018). These features are beneficial for all students, but they are necessary for students with SEND (Betts et al., 2013; Macy et al., 2018). Equally important is that the overall layout of the e-authentication system and the DLE are understandable and self-directed.

In general, students were quite comfortable with, and willing to, share their personal data for e-authentication. This result differs from Okada, Whitelock, et al.'s (2019) quantitative study, where students with disabilities had various concerns towards e-authentication. However, as the results show, it is difficult to predict what kind of personal data students are willing to share. The results of this qualitative study suggest that students' views on the e-authentication instruments varied and each student had individual preferences for certain instruments.

Limitations and future research

In the present qualitative study, the sample size (N = 15) was rather small. Students who expressed some interest in the study area to begin with volunteered to participate. Therefore, students with highly critical views on e-authentication may not have participated in this study. The data from some interviewees were quite limited. At the same time, the relatively short interviews yielded important information about specific student experiences. The use of questionnaire data, and its consistency with the interview data, also confirmed the findings.

The data were gathered from students in a test situation, which must be considered when evaluating the results. During the test, students had access to help and support. The situation would be different if students were using the e-authentication system by themselves. In addition, the TeSLA e-authentication system was still in development during the data collection period and some technical problems occurred. Nevertheless, it is important to study user experiences when a system is still under development. Only by testing the system individually is it possible to determine its weak spots in terms of accessibility.
Overall, this study raises questions about how an e-authentication system works in practice when it is a regular part of online education. Future research should continue to focus on user/student experiences and on determining what kind of information and help students need and the possible challenges that they may face.

Conclusion

Students' e-authentication in e-assessment and online education is a current issue for higher education institutions. Reliable e-authentication presumably assures the status of online education, and this opens up possibilities for many people to study. This study contributes to the existing research by shedding light on the experiences of students with SEND of testing an e-authentication system. Students had positive expectations of e-authentication. However, the findings raise issues for higher education staff to consider. It is worth noting that when integrating an e-assessment system in a DLE, the higher education institution must be aware of accessibility issues. Not all e-authentication instruments are easy to use or accessible for every student. The strength of a system such as TeSLA is that it offers several alternative instruments, so that students can choose the options that suit their individual needs.

Figure 1. Students' steps for testing the system
Table 1. Instruments integrated in the TeSLA system
Table 2. TeSLA instruments' enrolment and assessment tasks for the participants
Table 3. Students' (N = 15) willingness to share information about themselves before and after testing the e-authentication system
Application of response surface methodology (RSM) in statistical optimization and pharmaceutical characterization of a patient compliance effervescent tablet formulation of an antiepileptic drug levetiracetam

The main objective of the present study was to develop and optimize an effervescent tablet of levetiracetam, an antiepileptic drug, using a central composite design with response surface methodology (RSM). The present investigation helps to overcome problems that children and elderly people encounter with levetiracetam tablets and liquid dosage forms, such as bad taste and swallowing difficulties. It also offers an alternative to advanced patented manufacturing technologies such as the 3D printing process employed in SPRITAM® tablets. Levetiracetam effervescent tablets were prepared by a dry granulation (roll compaction) method using water-soluble excipients and optimized by a central composite rotatable design (CCRD) using two variables (citric acid and effersoda) at two levels (high and low). Overall, fourteen formulation trials were generated through the statistical software Minitab 17.3.0, placing 6 center points, 4 cube points, and 4 axial points. All formulations were subjected to compression using a single punch machine. Quality attributes of the compressed tablets were evaluated using various compendial and non-compendial tests. RSM was used to observe the responses, namely the effervescent time, hardness, and friability of the prepared tablet batches, for different levels of the variables. Polynomial equations were developed, and model plots (contour plots and 3-dimensional surface plots) were generated to study the impact of the acid-base couple on the responses. Finally, the optimized formulation was selected on the basis of the desired effervescent time, hardness, friability, percent drug release, and drug content. From the studied RSM design, it was observed that small changes in the independent variables (citric acid and effersoda) correlate with shifts in the dependent variables, i.e., the desired responses. The study reveals that the independent variables (citric acid and effersoda) and dependent variables (effervescent time, hardness, and friability) have a good correlation, as indicated by good linear regression coefficients of 0.9808, 0.9939, and 0.9892 for effervescent time, hardness, and friability, respectively. Levetiracetam effervescent tablets were satisfactorily prepared by the dry granulation (roll compaction) approach. All desired critical quality attributes were found to be satisfactory. The applicability of RSM with a desirability function in optimizing the levetiracetam formulation has made it possible to identify the impact of the various independent variables and explore their effect on the required responses.

Background

Pharmaceutical tablets are still the most accepted and dominant dosage form for drug delivery, occupying two-thirds of the global market. Various strategies have been used for the manufacturing of tablets, including wet granulation, dry granulation, and direct compression. Of these three manufacturing processes, dry granulation is the most preferred for moisture- and heat-sensitive drug molecules. Roller compaction is the perfect technique for dry granulation in the pharmaceutical industry. The compaction process involves compaction of a powder blend followed by compression in a tablet press [1]. In the global pharmaceutical market, various grades of excipients are available with unique physico-chemical properties.
Selecting suitable excipients for tablet manufacturing is always a crucial process. Proper selection of excipients and their levels delivers a cost-effective and reproducible product [2]. Nowadays, statistics is an integral part of any research. Thus, efforts have been made to use several statistical designs for the development and optimization of different pharmaceutical products. The most commonly used statistical designs in pharmaceutical development are response surface models [3], factorial designs (full or partial) [4], and the Taguchi orthogonal array design [5]. Response surface design (RSD) is the most common advanced design used for statistical process control and design of pharmaceutical formulations. It enables the user to develop, improve, and optimize process parameters by controlling the required responses. Screening of factors is always an important task while implementing any statistical design. Most commonly, the factors are identified or screened by screening designs like the Plackett-Burman or factorial design. RSD, in particular, is selected when the important factors affecting the response are already in hand or when curvature in the response is expected. Two types of response surface design are available, i.e., the central composite design (CCD) and the Box-Behnken design [6]. The central composite design is used when sequential experimentation is required, whereas the Box-Behnken design is used when fewer design points are required. CCD is the most pertinent design for RSM and basically involves cube points, center points, and axial points. This variety of points arranges the experiments in blocks [7,8]. The blocks formed by the central composite design may be orthogonal or rotatable. An orthogonal block analyzes the required variables independently and minimizes the variation of the regression coefficients, whereas a rotatable design provides constant prediction variance at all points situated at equal distance from the center point [9,10]. RSM was first investigated by Box and Hunter [10,11] and comprises statistical models with a full polynomial or quadratic relation between the studied variables and the response [12-14].

Several diseases affect the nervous system. Epilepsy, a neurological condition, has become one of the most serious chronic non-communicable disease conditions of the brain, with 50 million people affected worldwide. In the global market, various anti-epileptic medications are available. Levetiracetam is an antiepileptic drug and acts as a presynaptic calcium channel inhibitor. It is used as a monotherapy treatment for partial, myoclonic, and tonic-clonic seizures. It has been used orally in tablet and solution dosage forms [15,16]. After oral administration, levetiracetam generally takes 1 h for onset of action. The reported oral bioavailability of levetiracetam is more than 95%, owing to its rapid absorption after oral administration [17]. Most of the available tablets and other oral dosage forms of levetiracetam are bioequivalent due to this high bioavailability [18]. Levetiracetam is available on the market as film-coated tablets as well as an oral solution. Its tablets are available in 250, 500, 750, and 1000 mg strengths. Due to the high drug content, the tablets are relatively big, which creates problems during swallowing.
The problem associated with the large tablet size is quite significant for children, elderly people, and handicapped patients [19]. An alternative way of administering this medication is to crush the tablets and place them in food or down a nasogastric tube for those unable to swallow whole tablets. If this is done, exposing the drug powder leads to a bitter taste [20]. Another option for administration of levetiracetam is the oral solution dosage form. However, the solution dosage form is not usually preferred due to its short shelf life, unpleasant taste, and uncontrolled dose intake. Thus, there is a need for a user-friendly, patient-acceptable, fast-release levetiracetam formulation with a long shelf life. SPRITAM® is available in the market as a tablet-for-suspension system and is intended to disintegrate in the mouth when taken with a sip of liquid. It utilizes a patented new technology, a 3D printing process, to produce solid oral quickly dispersible porous tablets. The most important feature of this technology is that it produces a very porous tablet structure, which enables the tablet to disintegrate or disperse in the mouth immediately with a sip of water. However, the main disadvantage of this technology is that it is not feasible for all pharmaceutical manufacturing units, as the process is patented, complicated, and not cost-effective. Thus, there is a requirement for an alternative to the 3D printing process that is easy to operate, cost-effective, and economical.

The effervescent tablet dosage form has attracted attention as an alternative to the conventional tablet dosage form, especially for pediatric and geriatric patients. The advantages of an effervescent formulation include ease of use, non-invasive administration, ease of self-medication, and improved patient compliance through the use of suitable sweeteners and flavors. The idea of an effervescent formulation came from the need for a dosage form that helps children and elderly patients with administration. In this study, we attempt to develop effervescent tablets of levetiracetam using water-soluble excipients. The present work holds the advantages of both tablet and liquid dosage forms and also helps to overcome the problems encountered with oral tablet and liquid dosage forms. The present study can also be used as an alternative manufacturing process to the 3D printing process employed in SPRITAM® tablets. In the present investigation, RSM was utilized to design and optimize the effervescent formulation of levetiracetam, and the effect of the acid:base couple was assessed on effervescent time, friability, and hardness.

Experimental design of levetiracetam effervescent tablets using CCRD

CCRD was used for the development and optimization of immediate effervescent levetiracetam tablets using the software Minitab 17.3.0. Two factors at five levels (0, ±1, and ±α) were used, designated as X1: citric acid (320-960 mg) and X2: effersoda (320-960 mg), keeping all other excipients and the drug constant. RSM was used to evaluate the influence of the independent variables (i.e., the factors) on the dependent (response) variables Y1 (hardness), Y2 (friability), and Y3 (effervescent time). The model was further investigated through multiple linear regression analysis (MLRA), analysis of variance, and percent coefficient of variance.
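For illustration, the 14-run rotatable layout of this kind of design (4 cube points, 4 axial points at α = √2, and 6 centre points) can be written out directly. This is a minimal numpy sketch, not the authors' Minitab workflow, and it assumes that the ±1 coded levels correspond to the 320 and 960 mg bounds quoted above, which places the centre at 640 mg.

```python
# Sketch of the 14-run rotatable CCD described above, in coded units,
# assuming +/-1 corresponds to 320 mg and 960 mg (centre at 640 mg).
import numpy as np

alpha = np.sqrt(2)  # rotatability for 2 factors: alpha = (2**2) ** 0.25
cube = [(-1, -1), (1, -1), (-1, 1), (1, 1)]                 # 4 cube points
axial = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]  # 4 axial points
center = [(0, 0)] * 6                                       # 6 centre points
design = np.array(cube + axial + center)                    # 14 runs in total

def to_mg(coded, low=320.0, high=960.0):
    """Map a coded level to mg, assuming +/-1 = (low, high)."""
    mid, half = (high + low) / 2, (high - low) / 2
    return mid + coded * half

for run, (x1, x2) in enumerate(design, start=1):
    print(f"run {run:2d}: citric acid = {to_mg(x1):7.1f} mg, "
          f"effersoda = {to_mg(x2):7.1f} mg")
```

Under this assumption the axial points fall at roughly 187.5 and 1092.5 mg, which is consistent with the 187.5-640 mg span discussed later in the paper.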
A total of 14 experimental runs were generated, having 6 center points, 4 cube points, and 4 axial points. The different levels of the variables are given in Table 1. The coded and uncoded factors of the experimental design are presented in Table 2. For each batch of formulation, the drug and other ingredients were accurately weighed (Table 3) using a digital balance LC/GC (AXIS) and sifted through sieves #40 and #60. The prepared lubricated blends were compacted using a Kevin Mini roller compactor. The compacts were then crushed using an oscillating granulator mill fitted with a 1-mm screen and sifted through sieve #60 to obtain appropriately sized granules. The prepared blends were mixed with extragranular material such as mannitol, sorbitol, orange flavor, aspartame, and colloidal silicon dioxide, and finally lubricated with sodium acetate and sodium benzoate. The prepared final blends of each trial were subjected to compression using a single punch machine (17-station double rotary tablet compression machine, Cadmack CMB4-MT). All operations were carried out at low humidity, i.e., at less than 30% RH, to avoid moisture uptake of the excipients from the atmosphere.

Compatibility studies using Fourier transform infrared (FTIR) spectroscopy

The pure drug levetiracetam and its effervescent formulation were examined for compatibility employing FTIR (Shimadzu Ltd., Japan) spectroscopy. All samples were powdered properly and mixed with potassium bromide in a ratio of 1:5 (sample:potassium bromide) under an infrared lamp. The potassium bromide disks were prepared by compacting the powders at a pressure of 5 t for 5 min. The prepared samples were scanned over the range 4000 to 400 cm−1 with a resolution of 4 cm−1 [21].

Evaluation of blend for pre-compression parameters

The blend was characterized for certain pre-compression parameters like bulk density, tapped density, Carr's compressibility index, Hausner's ratio, and angle of repose [21-24]. The percentage moisture content present in the blend was determined using a Sartorius moisture analyzer. For measurement of loss on drying, about 1 g of sample was placed in a pan and heated for up to 10 min at 105°C.

Evaluation of tablets for post-compression parameters

Evaluation of quality attributes is of prime importance for any pharmaceutical formulation. The critical quality attributes of the tablets were evaluated using different pharmacopeial and non-pharmacopeial tests. The effervescent levetiracetam tablet formulations were evaluated for various quality control tests, such as weight variation, hardness, friability, disintegration, and drug content, as per the respective standard procedures. Weight variation was determined by individually weighing ten tablets randomly selected from the running batch; hardness was determined on twenty tablets from each formulation using a Schleuniger hardness tester (Schleuniger & Co., Switzerland); and friability was evaluated on 6.5 g of sample at 25 rpm for 4 min using a Roche friabilator (Electrolab) [25-27]. The effervescent time was measured by placing an effervescent tablet in a standard volume of water (approx. 120 to 180 ml) at room temperature and recording the moment when the solution became completely transparent [21,28]. For drug content analysis, ten tablets were randomly taken and crushed with a mortar and pestle.
Powder equivalent to the weight of one tablet was taken in a 50-ml volumetric flask and extracted with phosphate buffer pH 6.8. The mixture was filtered with Whatman filter paper and suitably diluted, and the drug content was measured using a UV spectrophotometer (UV-1800, Shimadzu Corporation, Kyoto, Japan) at λmax 205 nm [29-31].

Amount of carbon dioxide content [32,33]

In a beaker, a 10% sulfuric acid (equivalent to 1 N) solution was prepared by adding 6.9 ml of concentrated sulfuric acid to 250 ml of distilled water. One effervescent tablet was placed in 100 ml of the prepared sulfuric acid solution. The weight changes were measured, and the amount of carbon dioxide generated was determined from the observed weight difference of the sample at the end of effervescence.

In vitro dissolution study [31,34,35]

The dissolution test of the prepared levetiracetam effervescent tablets was performed in phosphate buffer pH 6.8 ± 0.1 as the dissolution medium, maintained at a temperature of 37 ± 0.5°C at 50 rpm, using the USP dissolution rate test apparatus type II (paddle type). A 5-ml sample was withdrawn at 5-min intervals up to 30 min, replenishing with 5 ml after each withdrawal to maintain a constant volume. The sample was filtered through Whatman filter paper (0.45 micron), and the absorbance of the sample was then measured at 205 nm using a spectrophotometer (UV-1800, Shimadzu Corporation, Kyoto, Japan). The amount of drug released was calculated from a previously prepared calibration curve of levetiracetam using phosphate buffer pH 6.8 ± 0.1 as a blank. The absorbance was measured at 205 nm because levetiracetam shows maximum absorbance at this wavelength. As the selected wavelength lies near the edge of the UV range, it was further verified: a placebo blend (a blend containing all excipients but no API) was measured at 205 nm, and no interference from the placebo blend was observed. This wavelength was therefore used for the dissolution and drug content measurements.

Results

In the present study, levetiracetam effervescent tablets were prepared using water-soluble excipients. The product development was carried out using levetiracetam as the API, anhydrous lactose as a diluent, sorbitol as a natural sweetener as well as a diluent to enhance compressibility, mannitol as a sweetener for a creamy texture as well as to enhance binding capacity, aspartame again as a sweetener, orange flavor as a flavoring agent, colloidal silicon dioxide as a glidant, citric acid as the acidic agent, effersoda as the alkali agent, and sodium acetate and sodium benzoate as lubricants. The coded and uncoded levels of the variable combinations generated by the Minitab software 17.3.0 are shown in Tables 1 and 2. The final compositions of all the executed trials are given in Table 3.

Compatibility studies using FTIR

The FTIR spectrum of levetiracetam showed various characteristic bands, such as -NH stretching (amine) at 3360 cm−1, -C=C stretching (aromatic) at 2991.59 cm−1, -C=O (carboxylic acid) at 1672.28 cm−1, and -CH stretching (alkyl) at 2939.99 cm−1 (Fig. 1a). The FTIR spectrum of the physical mixture of levetiracetam and the studied excipients showed all the characteristic bands of levetiracetam, which indicates compatibility between levetiracetam and the studied excipients (Fig. 1b).

Evaluation of blend properties

The micromeritic parameters for all the formulation blends were determined, and the results are depicted in Table 4.
The angle of repose (35.24-45.04°), Hausner's ratio (1.23-1.37), and Carr's index (18.52-26.76) were found to be within the specified limits.

Evaluation of post-compression parameters

All the formulations were subjected to compression after the preformulation studies. The compressed tablets were evaluated for different quality control parameters, which are presented in Table 5. The results showed that the different tests, such as weight variation, hardness, friability, effervescent time, amount of carbon dioxide content, drug content, and in vitro drug release, were within acceptable limits.

Experimental design

In the present investigation, RSM was selected for optimization. The details of the dependent and independent factors are presented in Tables 1 and 2. As per the RSM, 14 formulations were formulated and evaluated for their response variables, i.e., hardness, friability, and effervescent time. Quadratic models were applied to study the relationships of the factors with hardness, friability, and effervescent time. Statistical model summaries of the response variables are presented in Tables 6, 7, and 8. The lack-of-fit test was performed to analyze the variation between the fitted values and the obtained values. The significance of the lack of fit and the R² values of the responses were also estimated.

RSM effect of citric acid and effersoda on hardness

The results of the hardness study are presented in Table 5. With a fit statistic of 0.9683, the fitted polynomial equation showed a good fit to the response variable (hardness). The regression analysis for hardness showed a positive sign for both citric acid and effersoda, suggesting that hardness increases with an increase in the amounts of citric acid and effersoda. ANOVA analysis of the model suggested that the independent variables significantly affected (p < 0.05) the prediction of the response (hardness), and the coefficient terms with p values less than 0.05 had a significant effect on the prediction efficacy of the model. The described relationship of the independent variables (citric acid and effersoda) with the response hardness can be readily seen in the contour plot and 3D surface plot of hardness shown in Figs. 2 and 3.

RSM effect of citric acid and effersoda on friability

The results of the friability study are presented in Table 5. From the CCD study, it was noted that the friability of all formulations was within the acceptable limit of NMT 1% (0.44 to 0.92%). The predicted friability values are expressed in the following equation:

Friability (%) = 2.1726 − 0.002121 × citric acid (mg) − 0.003349 × effersoda (mg) + 0.000001 × citric acid (mg)² + 0.000002 × effersoda (mg)² + 0.000001 × citric acid (mg) × effersoda (mg)

The equation clearly indicates that both factors, citric acid and effersoda, carry a negative sign for their linear terms; thus, each variable individually has an opposing effect on friability. Based on the values and signs of the coefficients, it can be concluded that the concentrations of citric acid and effersoda had a negligible effect on friability. ANOVA analysis of the generated model equation suggested that the independent variables significantly affected (p < 0.05) the prediction of the response (friability). From the contour and 3D surface plots, it can be seen that the required friability was achieved around a citric acid:effersoda ratio of 640:640 mg (i.e., a 1:1 ratio). The standard error of the regression equation (S) represents how well the regression equation fits the data.
The S value represents the relation between the actual and predicted responses in Table 7. The contour plot and 3D model surface plot of friability are illustrated in Figs. 4 and 5, respectively.

RSM effect of citric acid and effersoda on effervescent time
Effervescent time is an important evaluation parameter for any effervescent tablet formulation. Thus, the above study also included effervescent time as a response parameter. From the study, a quadratic equation for the predicted effervescent time values was likewise obtained.

Discussion
Each of the levetiracetam effervescent tablets consists of a particular composition of citric acid and effersoda. The results of the FTIR study demonstrate that the excipients used in the present study are compatible with levetiracetam and that there is no interaction between the levetiracetam drug and the other ingredients. It is always critical and tedious to develop and optimize the complex manufacturing processes of any dosage form. Thus, efforts have been made to utilize different statistical designs to overcome these types of problems. Among the various experimental designs, RSM is the most common advanced design used for statistical process control and optimization.

From the preformulation study, it was observed that the majority of formulations exhibited a fair angle of repose, reflecting excellent or good flow abilities as per the USP general chapter [21]. However, the compressibility of the blends fell in the fair (16-20) to passable (21-25) category. While there were variations in the bulk density of the formulations, the tapped density of all the formulations was found to be close to each other. It was observed that as the concentrations of citric acid and effersoda increased, the flow property decreased. This is due to the very poor rheological properties of both citric acid and effersoda [36]. The final lubricated powder blends were used for the development of the levetiracetam effervescent tablets.

The formulated effervescent tablets were evaluated for various parameters, such as weight variation, hardness, friability, effervescent time, amount of carbon dioxide released, drug content, and percentage of drug release. The weights of the tablets of all formulations (C1-C14) were in close proximity to the target value. It was observed that the percentage of drug release for all formulations was more than 90%. This is due to the BCS class I solubility characteristics of levetiracetam, which give it high solubility across the pH range. The carbon dioxide release study of the formulations was carried out using the alkalimetric method. In this method, the water formed is taken up by the sulfuric acid, and thus the determination is more accurate and exact [33]. From the study, it was observed that an apparently lower amount of carbon dioxide was released for all formulations. This may be due to the use of hygroscopic diluents such as sorbitol and anhydrous lactose; the high absorption capacity of these diluents slows the onset of the effervescent reaction. A similar result was reported in a study developing an effervescent formulation containing sorbitol as a diluent [37]. It was also observed that the amounts of carbon dioxide released by all formulations were similar and in close proximity to each other. For the optimization of the citric acid and effersoda ratio, the effects of the variables on hardness, friability, and effervescent time were investigated through polynomial equations and response curves.
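To make the response-curve computation concrete, the sketch below evaluates the friability polynomial reported above over a grid of citric acid and effersoda levels; this is the same surface that Figs. 4 and 5 visualize, reproduced numerically.

```python
import numpy as np

def friability(ca_mg, ef_mg):
    """Predicted friability (%) from the fitted quadratic model quoted in the text."""
    return (2.1726
            - 0.002121 * ca_mg
            - 0.003349 * ef_mg
            + 0.000001 * ca_mg * ca_mg
            + 0.000002 * ef_mg * ef_mg
            + 0.000001 * ca_mg * ef_mg)

# Evaluate over the studied factor range (187.5-1092.5 mg) to trace the response surface.
ca = np.linspace(187.5, 1092.5, 5)
ef = np.linspace(187.5, 1092.5, 5)
CA, EF = np.meshgrid(ca, ef)
F = friability(CA, EF)

for row_ef, row in zip(ef, F):
    cells = "  ".join(f"{v:5.2f}" for v in row)
    print(f"effersoda {row_ef:7.1f} mg: {cells}")
print(f"Predicted friability at the 1:1 level (640 mg each): {friability(640, 640):.2f} %")
```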
The RSM effect of each response was studied with respect to the independent variables citric acid and effersoda. From the study of the RSM effect of citric acid and effersoda on hardness, it was observed that the hardness varied from 123 to 158 N. The observed tablet hardness was relatively high. This result is due to the properties of the excipients used, such as anhydrous lactose, mannitol, and starch. All these excipients have a high binding capacity and good compressibility, which enable the production of quality, robust tablets at lower compression forces. Moreover, anhydrous lactose has a lower lubrication property, which enables a high elastic modulus and tensile strength. In the formulation, a higher amount of anhydrous lactose was used, which reduces surface irregularities and increases the tensile strength. A study by Sun et al. obtained similar results using lactose and mannitol as diluents [38]. There was a clear indication from the results that the ratio of citric acid and effersoda has a great impact on the response variable hardness. A trend of increasing tablet hardness was observed with an increase in the citric acid and effersoda levels from 187.5 to 640 mg. The increase in the tensile strength of the tablets with increasing citric acid concentration is attributed to the physical properties of citric acid: its uniquely coarse particle size supports bonding between the particle surfaces and results in a stronger compact. As a result, with an increase in the citric acid:effersoda level, the hardness also showed an increasing, upward trend. The results obtained were in consonance with the study conducted by Sun et al., which reported a relation of citric acid with hardness [38]. However, it was observed that the required hardness was achieved when citric acid:effersoda was used at 640:640 mg (1:1 ratio). Higher amounts of both ingredients decrease the hardness. This could be attributed to the plasticizer properties of citric acid when used in larger quantities: as the citric acid concentration was increased, the citric acid present in the blend played the role of a plasticizer, which decreased the interactions among the macromolecules and resulted in a decrease in tensile strength [39]. The diminution of hardness may also be ascribed to the plastic deformation of sodium bicarbonate when compressed. The results obtained were in agreement with early work on sodium bicarbonate suggesting plastic deformation [40].

From the RSM effect of citric acid and effersoda on friability, it was observed that all formulations were within the friability limit of not more than 1%. A direct relationship between the percentage friability and tablet hardness was observed, and a study has likewise reported a direct relationship between the percentage friability and tablet hardness [41]. From the analyzed data, it was inferred that friability was lowest when the concentrations of both variables were at 640 mg:640 mg (1:1 ratio). However, higher friability was observed when the concentrations of both variables were less than 640 mg, and the same trend was observed with higher concentrations of citric acid and effersoda. The full quadratic polynomial equation model used to measure the response friability revealed very negligible and small values for the interaction effects, squared effects, and linear effects.
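For reference, the full quadratic polynomial fitted to each response is the standard second-order RSM model in two factors, of which the friability equation above is one instance (stated here in its generic textbook form, not as a new result of the paper):

```latex
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{11} x_1^2 + \beta_{22} x_2^2 + \beta_{12} x_1 x_2
```

where x1 is the citric acid level (mg), x2 the effersoda level (mg), and y the response (hardness, friability, or effervescent time).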
The observed values from the multiple linear regression analysis (MLRA) studies indicated that the obtained friability results are mainly influenced by hardness. There was a clear indication from the results that neither independent factor individually had a significant effect on effervescent time. However, the ratio of citric acid and effersoda has a great impact on the response variable effervescent time. A trend of decreasing effervescent time was observed with an increase in the concentrations of both citric acid and effersoda from 640 to 1092 mg. This may be because increasing amounts of citric acid and sodium bicarbonate result in the formation of more carbon dioxide, which helps to break up the tablet and accelerates the reaction [28]. The high hardness of the tablets also affected the effervescent time. In the hardness study, it was observed that tablet hardness increased as the citric acid and effersoda levels increased from 187.5 to 640 mg, while higher amounts of both ingredients decreased the hardness. A similar trend was observed for effervescent time. Thus, it is believed that the effervescent time was driven both by hardness and by the reaction between citric acid and effersoda. Similar results were observed by Sun et al. [38] while developing a fast-dispersible fruit tablet made from mango, Chlorella, and cactus powder. The RSM effect of citric acid and effersoda on effervescent time revealed that a higher effervescent time was observed with citric acid:effersoda at 640 mg:640 mg (1:1 ratio); the observed effervescent time at this 1:1 ratio was 96 to 102 s. A lower effervescent time was recorded with the other combinations of citric acid and effersoda (68 to 83 s). Moreover, as per the EP, the limit for effervescent time should be less than 5 min, and all the effervescent tablet formulations had acceptable effervescent time values. However, based on the required responses, i.e., hardness, friability, and effervescent time, the formulation containing citric acid:effersoda at 640 mg:640 mg (1:1 ratio) was considered the optimized formulation. The standard error of the regression equation (S), which represents the relation between the actual and predicted responses, was found to be 2.36032, 0.02878, and 2.3554 for hardness, friability, and effervescent time, respectively; it represents the average distance of the data points from the fitted line. The adjusted regression (R²) values for hardness, friability, and effervescent time were 0.9939, 0.9892, and 0.9808, respectively. The low S values and high adjusted R² values reflect the appropriateness, or goodness of fit, of the model. Among all the manufactured formulations, C3, C4, C6, C8, C9, and C13 were selected as optimized formulations in view of their reduced friability, acceptable hardness, and effervescent time. These 6 trials represent the center point of the studied model.

Conclusion
Optimized formulations of an immediate-release levetiracetam effervescent tablet with 20% citric acid (640 mg) and 20% effersoda (640 mg) were successfully developed by dry granulation (roll compaction) using water-soluble excipients through CCRD, with the desired response attributes of effervescent time, hardness, and friability. A quadratic model was used to study the influence of the formulation factors on the response variables using RSM.
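The two fit statistics quoted throughout have their standard definitions, stated here for reference (these formulas are the textbook forms, not reproduced from the paper):

```latex
S = \sqrt{\frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{n - p}}, \qquad
R^2_{\mathrm{adj}} = 1 - \left(1 - R^2\right)\frac{n - 1}{n - p}
```

where n is the number of experimental runs and p the number of model terms.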
Statistical analysis of the proposed model showed good coefficients of regression for effervescent time (0.9808), hardness (0.9939), and friability (0.9892). The F ratios of the regression for all test variables against the response variables were significant. Predicted and actual results were in agreement within the 95% confidence interval. For acceptance by children and elderly people, the bitter taste of the formulation was masked by using aspartame and orange flavor in suitable concentrations. The study demonstrated that CCRD with RSM is a systematic optimization approach which can be used effectively to study the inter-relationships of the variables studied.
2020-11-16T14:39:55.857Z
2020-11-16T00:00:00.000
{ "year": 2020, "sha1": "57118ee420e7724f8e7aa4493f69c4750b7215a0", "oa_license": "CCBY", "oa_url": "https://fjps.springeropen.com/track/pdf/10.1186/s43094-020-00096-0", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "57118ee420e7724f8e7aa4493f69c4750b7215a0", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Mathematics" ] }
59481414
pes2o/s2orc
v3-fos-license
The Impact of Nigeria Microfinance Banks on Poverty Reduction: Imo State Experience

This paper attempts to provide a critical appraisal of the debate on the effectiveness of microfinance as an effective tool for eradicating poverty, and also reviews the history of microfinance banks in Nigeria. It argues that while microfinance has developed some innovative management and business strategies, its impact on poverty reduction remains in doubt. The impact of microfinance on poverty reduction in Imo State was studied using a stratified sampling method in the selection of the customers. The study area was divided into 16 sample units based on the various local government areas in Imo State. Four (4) MFBs were purposefully selected from each of the 3 Senatorial Zones, making a total of 12 MFBs. In order to have an unbiased selection of samples, three hundred and eighty-two (382) questionnaires were randomly distributed to customers of these selected microfinance banks in the three senatorial zones as follows: Owerri (82), Okigwe (100) and Orlu (200). The results revealed that the majority of respondents were male, constituting about 78 %, while women made up 22 %, and the majority of the respondents were married (65 %), with 33 % single and 2 % divorced. 137 of the respondents do not have any formal education, 67 possess a primary school leaving certificate, 81 indicated having a secondary school certificate, 71 hold a diploma/NCE or its equivalent, and 28 have a first degree certificate or above, representing 36 %, 17 %, 21 %, 19 % and 7 %, respectively. The monthly income brackets of the respondents show that one hundred and eleven (111) respondents (29 %) indicated earning N10,000-N15,000; 95 respondents, or 25 %, indicated N15,001-N20,000 as their income bracket; 94, or 24 %, were earning above N20,000; while 84 (22 %) indicated earning below N10,000. From the results, the high-income class has more capacity to save than the poor dwelling in rural areas. This finding appears to support the prediction of the economic theory of savings, which argues that saving is a function of the level of income. The implication of this study is that the federal government of Nigeria and financial institutions in the country should take up the challenge of establishing bank branches in the rural areas or make formidable arrangements for supplying more credit to the rural dwellers.

INTRODUCTION
Since the 1970s, and especially since the new wave of microfinance in the 1990s, microfinance has come to be seen as an important development policy and a poverty reduction tool. Some argue (e.g. Littlefield et al. 2003; World Savings Bank Institute 2010), as reported by Adjei, Arun and Hossain (2009), that microfinance is a key tool to achieve the Millennium Development Goals (MDGs). The assumption is that if one gives more microfinance to poor people, poverty will be reduced. But the evidence regarding such impact is challenging and controversial, partly due to the difficulties of reliable and affordable measurement and of fungibility (Ashraf, Gine and Karla, 2008), the methodological challenge of proving causality (i.e. attribution), and because impacts are highly context-specific (Brau and Woller 2004; Hulme 1997; Hulme 2000; Makina and Malobola 2004:801; Sebstad and Cohen 2000). The provision of funds in the form of credit and microloans empowers the poor to engage in productive economic activities which can help boost their income level and thus alleviate poverty in the economy.
Schreiner (2001) defines microfinance as efforts to improve access to loans and to savings services for poor people. It is currently being promoted as a key development strategy for promoting poverty reduction/eradication and economic empowerment. It has the potential to effectively address material poverty, the physical deprivation of goods and services, and the lack of income to attain them, by granting financial services to households who are not served by the formal banking sector. Microfinance is an effective development tool for promoting pro-poor growth and poverty reduction. Financial services enable poor and low-income households to take advantage of economic opportunities, build assets, and reduce their vulnerability to external shocks that adversely affect their living standards. Credit policy for the poor involves many practical difficulties arising from the operating procedures followed by financial institutions and from the economic characteristics and financing needs of low-income households. For example, commercial banking institutions require that borrowers have a stable source of income out of which principal and interest can be paid back according to the agreed terms. However, the income of many self-employed households is not stable. A huge number of micro loans is needed to serve the poor, but banking institutions prefer dealing with a small number of big loans to minimize administration expenses. They also look for collateral with a clear title, which many low-income households do not have. In addition, bankers tend to consider low-income households a bad risk, imposing exceedingly high information and monitoring costs on their operations. Three features distinguish microfinance from other formal financial products: (i) the smallness of the loans advanced and/or savings collected, (ii) the absence of asset-based collateral, and (iii) the simplicity of operations (see Appendix 1). Ideally, one could ascertain the impact of microfinance if the counterfactual, what would have happened to a person who borrowed from a micro lender if he or she had not done so, could be easily tested. Many early studies compared borrowers with non-borrowers. But if borrowers are more entrepreneurial than those who do not borrow, such comparisons are likely to grossly overstate the effect of microcredit. Questions regarding the impact of microfinance on the welfare and income of the poor have therefore been raised many times. According to Chowdhury (2009), two recent studies attempted to overcome the problem of self-selection (i.e., the likelihood of people with entrepreneurial skills borrowing) by using randomized sample selection methods; that is, participation in a programme is determined essentially by chance. Contrary to the usual claims, neither study found that microcredit reduced poverty. Microcredit may not even be the most useful financial service for the majority of poor people. The MIT study by Banerjee, Duflo, Glennerster and Kinnan (2009) found no impact on measures of health, education, or women's decision-making among the slum dwellers in the city of Hyderabad, India. Similarly, the study by Karlan and Zinman (2009), as reported by Chowdhury (2009), which measured the probability of being below the poverty line and the quality of food that people ate, found no discernible effects. The most-cited source of evidence on the impacts of microfinance is the early set of studies collected by Hulme and Mosley (1996).
The findings of these studies are provocative: poor households do not benefit from microfinance; it is only non-poor borrowers (with incomes above poverty lines) who can do well with microfinance and enjoy sizable positive impacts. More troubling is the finding that a vast majority of those with starting incomes below the poverty line actually ended up with less incremental income after getting micro-loans, compared with a control group which did not get such loans. Despite various studies, 'the question of the effectiveness and impact on the poor of (microfinance) programs is still highly in question' (Westover 2008). Roodman and Morduch (2009) reviewed studies on micro-credit in Bangladesh and similarly conclude that '30 years into the microfinance movement we have little solid evidence that it improves the lives of clients in measurable ways'. Even the World Bank report Finance for All? (2007:99) indicates that 'the evidence from micro-studies of favorable impacts from direct access of the poor to credit is not especially strong'. Several factors have accounted for the persisting gap in access to financial services. For instance, the distribution of microfinance banks in Nigeria is not even, as many of the banks are concentrated in a particular section of the country which investors perceive to possess high business volume and profitability. Also, many of the banks carried over the inefficiencies and challenges faced during the community banking era. In addition, the dearth of knowledge and skills in micro financing affected the performance of the MFBs. Furthermore, there are still inadequate funds for intermediation, owing to a lack of aggressive savings mobilization, an inability to attract commercial capital, and the non-establishment of the Microfinance Development Fund. In Nigeria, a large percentage of the population is still excluded from financial services. The 2010 EFInA (Enhancing Financial Innovation & Access) study revealed a marginal increase in those served by the formal financial market, from 35.0 percent in 2005 to 36.3 percent in 2010, five (5) years after the launching of the microfinance policy. When those that had financial services from the informal sector, such as savings clubs/pools, Esusu, Ajo, and money lenders, were included, the total access percentage for 2010 was 53.7 percent, which means that 46.3 percent, or 39.2 million of the adult population, were financially excluded in Nigeria (CBN, 2012). Against the backdrop of concerns expressed by stakeholders and the need to enhance financial services delivery, the 2005 Microfinance Policy, Regulatory and Supervisory Framework for Nigeria was revised in April 2011, in exercise of the powers conferred on the Central Bank of Nigeria by the provisions of Section 28, sub-section (1)(b) of the CBN Act 24 of 1991 (as amended) and in pursuance of the provisions of Sections 56-60(a) of the Banks and Other Financial Institutions Act (BOFIA) 25 of 1991 (as amended). The policy recognizes existing informal institutions and brings them within the supervisory purview of the CBN, creating a platform for the regulation and supervision of microfinance banks (MFBs) through specially crafted Regulatory Guidelines.

United Nations Mandate for Microfinance and Poverty Alleviation
The World Summit for Social Development (WSSD) in March 1995 articulated a global commitment by Governments to eradicate poverty as an ethical, social, political and economic imperative.
Poverty eradication was one of the three core themes of the WSSD. The Programme of Action affirmed the primacy of national responsibility for social development, including poverty eradication, but also called for international support to assist governments in developing strategies. The Programme of Action suggested ways to involve civil society in social development and to strengthen its capacities. It called on Governments to mobilize resources for social development, including poverty alleviation. The WSSD Programme of Action was to be implemented within the framework of international cooperation that integrated the follow-up to the then recent and planned UN conferences relating to social development, for example, the Children's Summit in 1990 and the Environment and Development Conference in 1992. Microfinance is one tool for poverty alleviation. The enabling environment influences the effectiveness of microfinance in the other four areas of poverty alleviation interventions. The UN organizations' mandates in the area of microfinance lie primarily in the area of technical assistance and the demonstration of models that contribute effectively to poverty alleviation. The responsibility for the provision of capital rests with governments, with support from bilateral donors and international financial institutions (Report of United Nations, 1995).

2. Objectives of the Study
The objectives of the study include:
1. To examine the poverty situation in Imo State, Nigeria.
2. To investigate the activities of microfinance banks in Imo State, Nigeria.
3. To examine the effectiveness of microfinance banks in the alleviation of poverty in Imo State, Nigeria.
4. To provide suggestions on how to solve the problems as a step towards enhancing the economic status of members, thereby serving to reduce the rate of poverty among them.
5. To find out if the income class of an individual affects his or her savings in Imo State.
6. To find out if microfinance bank credit leads to poverty reduction in Imo State.

3. Research Questions
In order to pursue the objectives of the study, the following research questions were formulated:
1. Do microfinance banks assist in promoting the financial success of their customers?
2. Do microfinance banks help in encouraging savings in Imo State?
3. Do microfinance banks help in the alleviation of poverty in Imo State, Nigeria?
Hypotheses were thereby formulated and stated in the null form as below:
Ho: There is no significant relationship between the level of someone's income and access to financial services in Imo State.
H1: There is a significant relationship between the level of someone's income and access to financial services in Imo State.

1. Microfinance in Nigeria
The licensing of microfinance banks in Nigeria is the responsibility of the Central Bank of Nigeria. The practice of microfinance in Nigeria is culturally rooted and dates back several centuries. The traditional microfinance institutions provide access to credit for rural and urban low-income earners. They are mainly of the informal Self-Help Group (SHG) or Rotating Savings and Credit Association (ROSCA) types. Other providers of microfinance services include savings collectors and co-operative societies. The informal financial institutions generally have limited outreach, due primarily to the paucity of loanable funds.
In order to enhance the flow of financial services to Nigerian rural areas, the Government has, in the past, initiated a series of publicly financed micro/rural credit programmes and policies targeted at the poor. Notable among such programmes were the Rural Banking Programme, the sectoral allocation of credits, a concessionary interest rate, and the Agricultural Credit Guarantee Scheme (ACGS). Other institutional arrangements were the establishment of the Nigerian Agricultural and Co-operative Bank Limited (NACB), the National Directorate of Employment (NDE), the Nigerian Agricultural Insurance Corporation (NAIC), the Peoples Bank of Nigeria (PBN), the Community Banks (CBs), and the Family Economic Advancement Programme (FEAP). In 2000, the Government merged the NACB with the PBN and FEAP to form the Nigerian Agricultural Co-operative and Rural Development Bank Limited (NACRDB), to enhance the provision of finance to the agricultural sector. It also created the National Poverty Eradication Programme (NAPEP), with the mandate of providing financial services to alleviate poverty. Despite these measures, it became increasingly evident that such governmental policies failed to grant financial access to those most in need (i.e. the rural poor) and that the programmes were largely unsustainable. The CBN, at the sixth Annual Microfinance Conference and Entrepreneurship Awards held recently in Abuja, said the microfinance development fund would be established in 2012 and would include both commercial and social components that would enhance its operations and outreach. The fund will also aim at improving access to affordable and sustainable sources of finance by microfinance institutions and microfinance banks. This is the second time that the CBN has made such a pronouncement; the first was at the 5th Annual Microfinance Conference and Entrepreneurship Awards in 2011, where Kingsley Moghalu, the deputy governor, financial system stability, said the CBN would establish a microfinance development fund to promote access to financial services for low-income earners.

Justification for the Establishment of Microfinance Banks in Nigeria
Weak Institutional Capacity: The prolonged sub-optimal performance of many existing community banks, microfinance and development finance institutions is due to incompetent management, weak internal controls and the lack of deposit insurance schemes. Other factors are poor corporate governance, a lack of well-defined operations and restrictive regulatory/supervisory requirements.
Weak Capital Base: The weak capital base of existing institutions, particularly the present community banks, cannot adequately provide a cushion for the risk of lending to micro entrepreneurs without collateral. This is supported by the fact that only 75 out of over 600 community banks whose financial statements of accounts were approved by the CBN in 2005 had up to N20 million in shareholders' funds unimpaired by losses.
The Existence of a Huge Un-Served Market: The size of the market unserved by existing financial institutions is large. The average banking density in Nigeria is one financial institution outlet to 32,700 inhabitants. In the rural areas, it is 1:57,000; that is, less than 2% of rural households have access to financial services. Furthermore, the 8 (eight) leading Micro Finance Institutions (MFIs) in Nigeria were reported to have mobilized total savings of N222.6 million in 2004 and advanced N2.624 billion in credit, with an average loan size of N8,206.90.
This translates to about 320,000 membership-based customers that enjoyed one form of credit or another from the eight NGO-MFIs. Their aggregate loans and deposits, when compared with those of community banks, represented percentages of 23.02 and 1.04, respectively. This reveals the existence of a huge gap in the provision of financial services to a large number of active but poor and low-income groups. The existing formal MFIs serve fewer than one million of the over 40 million people that need the services. Also, aggregate micro credit facilities in Nigeria account for about 0.2 percent of GDP and less than one percent of total credit to the economy. The effect of not appropriately addressing this situation would be to further accentuate poverty and slow down growth and development.
Economic Empowerment of the Poor, Employment Generation and Poverty Reduction: The baseline economic survey of Small and Medium Industries (SMIs) in Nigeria conducted in 2004 indicated that the 6,498 industries covered currently employ a little over one million workers. Considering the fact that about 18.5 million (28 % of the available work force) Nigerians are unemployed, the employment objective/role of the SMIs is far from being reached. One of the hallmarks of the National Economic Empowerment and Development Strategy (NEEDS) is the empowerment of the poor and the private sector through the provision of needed financial services, to enable them to engage in or expand their present scope of economic activities and generate employment. Delivering the needed services as contained in the Strategy would be remarkably enhanced through the additional channels which the microfinance bank framework would provide. It would also assist the SMIs in raising their productive capacity and level of employment generation.
The Need for Increased Savings Opportunity: The total assets of the 615 community banks which rendered their reports, out of the 753 community banks operating as at end-December 2004, stood at N34.2 billion. Similarly, their total loans and advances amounted to N11.4 billion, while their aggregate deposit liabilities stood at N21.4 billion for the same period. Also, as at end-December 2004, the total currency in circulation stood at N545.8 billion, of which N458.6 billion, or 84.12 per cent, was outside the banking system. Poor people can and do save, contrary to general misconceptions. However, owing to the inadequacy of appropriate savings opportunities and products, savings have continued to grow at a very low rate, particularly in the rural areas of Nigeria. Most poor people keep their resources in kind or simply under their pillows. Such methods of keeping savings are risky, low in terms of returns, and undermine the aggregate volume of resources that could be mobilized and channelled to deficit areas of the economy. The microfinance policy would provide the needed window of opportunity and promote the development of appropriate (safe, less costly, convenient and easily accessible) savings products that would be attractive to rural clients and improve the savings level in the economy.
The Interest of Local and International Communities in Microfinancing: Many international investors have expressed interest in investing in the microfinance sector. Thus, the establishment of a microfinance framework for Nigeria would provide an opportunity for them to finance the economic activities of low-income groups and the poor.
Utilization of the SMEEIS Fund: As at December 2004, only N8.5 billion (29.5 %) of the N28.8 billion Small and Medium Enterprises Equity Investment Scheme (SMEEIS) fund had been utilized. Moreover, the 10 % of the fund meant for micro credit had not been utilized, due to the lack of an appropriate framework and of confidence in the existing institutions that would have served the purpose. This policy provides an appropriate vehicle to enhance the utilization of the fund.

3. Policy Objectives
The specific objectives of this microfinance policy are the following:
1. Make financial services accessible to a large segment of the potentially productive Nigerian population which otherwise would have little or no access to financial services;
2. Promote synergy and mainstreaming of the informal sub-sector into the national financial system;
3. Enhance service delivery by microfinance institutions to micro, small and medium entrepreneurs;
4. Contribute to rural transformation; and
5. Promote linkage programmes between universal/development banks, specialized institutions and microfinance banks.

4. Policy Targets
Based on the objectives listed above, the targets of the policy are as follows:
1. To cover the majority of the poor but economically active population by 2020, thereby creating millions of jobs and reducing poverty.
2. To increase the share of micro credit as a percentage of total credit to the economy from 0.9 percent in 2005 to at least 20 percent in 2020, and the share of micro credit as a percentage of GDP from 0.2 percent in 2005 to at least 5 percent in 2020.
3. To promote the participation of at least two-thirds of state and local governments in micro credit financing by 2015.
4. To eliminate gender disparity by improving women's access to financial services by 5 % annually; and
5. To increase the number of linkages among universal banks, development banks, specialized finance institutions and microfinance banks by 10 % annually.

5. The Goals
The establishment of microfinance banks has become imperative to serve the following purposes:
1. Provide diversified, affordable and dependable financial services to the active poor, in a timely and competitive manner, that would enable them to undertake and develop long-term, sustainable entrepreneurial activities;
2. Mobilize savings for intermediation;
3. Create employment opportunities and increase the productivity of the active poor in the country, thereby increasing their individual household income and uplifting their standard of living;
4. Enhance organized, systematic and focused participation of the poor in the socio-economic development and resource allocation process;
5. Provide veritable avenues for the administration of the micro credit programmes of government and high-net-worth individuals on a non-recourse basis. In particular, this policy ensures that state governments shall dedicate an amount of not less than 1% of their annual budgets to the on-lending activities of microfinance banks in favour of their residents; and
6. Render payment services, such as salaries, gratuities, and pensions, for the various tiers of government.

6. Establishment
Private sector-driven microfinance banks shall be established. The banks shall be required to be well capitalized, technically sound, and oriented towards lending based on the cash flow and character of clients.
There shall be two categories of Micro Finance Banks (MFBs), namely:
- Micro Finance Banks (MFBs) licensed to operate as a unit bank, and
- Micro Finance Banks (MFBs) licensed to operate in a state.
The recognition of these two categories of banks does not preclude them from aspiring to national coverage, subject to their meeting the prudential requirements. This is to ensure an orderly spread and coverage of the market and to avoid, in particular, concentration in areas already having large numbers of financial institutions. An existing NGO which intends to operate an MFB can either incorporate a subsidiary MFB, while still carrying out its NGO operations, or fully convert into an MFB.
MFBs Licensed to Operate as a Unit Bank (a.k.a. Community Banks): MFBs licensed to operate as unit banks shall be community-based banks. Such banks can operate branches and/or cash centres subject to meeting the prescribed prudential requirements and the availability of free funds for opening branches/cash centres. The minimum paid-up capital for this category of banks shall be N20.0 million for each branch.
MFBs Licensed to Operate in a State: MFBs licensed to operate in a State shall be authorized to operate in all parts of the State (or the Federal Capital Territory) in which they are registered, subject to meeting the prescribed prudential requirements and the availability of free funds for opening branches. The minimum paid-up capital for this category of banks shall be N1.0 billion.

7. Organic Growth Path for MFBs
1. This policy recognizes that the current financial landscape of Nigeria is skewed against Micro, Small and Medium Enterprises (MSMEs) in terms of access to financial services. To address the imbalance, this policy framework shall promote an even spread of microfinance banks, their branches and activities, to serve the un-served but economically active clients in the rural and peri-urban areas.
2. The level of spread and saturation of the financial market shall be taken into consideration before approval is granted to an MFB to establish branches across Local Government Areas and/or States, in fulfilment of the objectives of this policy. Specifically, an MFB shall be expected to have a reasonable spread in a Local Government Area or State before moving to another location, subject to meeting all the necessary regulatory and supervisory requirements stipulated in the guidelines. This is to avoid concentration in already served areas and to ensure the extension of services to the economically active poor and to micro, small and medium enterprises.
3. In order to achieve the objectives of an organic growth path, a microfinance bank licensed to operate as a unit bank shall be allowed to open new branches in the same State, subject to meeting the prescribed prudential requirements and the availability of minimum free funds of N20.0 million for each new branch. In fulfilment of this requirement, an MFB licensed to operate as a unit bank can attain the status of a State MFB by spreading organically from one location to another until it covers at least two-thirds of the LGAs of that State. When an MFB has satisfactorily covered a state and wishes to start operations in another state, it shall obtain approval and be required again to grow organically, by having at least N20 million in free funds unimpaired by losses for each branch to be opened in the new state.
4. An MFB licensed to operate in a State shall be allowed to open a branch in another State, subject to opening branches in at least two-thirds of the local governments of the State in which it is currently licensed to operate, the provision of N20.0 million in free funds and, in the view of the regulatory authorities, satisfaction of all the requirements stipulated in the guidelines.
5. The regulations to be issued from time to time shall be such as to encourage the organic growth path of the MFBs at all times.
6. However, an MFB may wish to start operations as a State Bank from the beginning and therefore not wish to grow organically from branch to branch. Such an MFB may be licensed and authorized to operate in all areas of the state from the beginning, subject to the provision of a total capital base of N1 billion. In other words, the preferred growth path for MFBs is branch-by-branch expansion to become State Banks, but anyone wishing to start as a big state institution from the beginning can do so, subject to the availability of N1 billion and proven managerial competence.

8. Ownership
1. Microfinance banks can be established by individuals, groups of individuals, community development associations, private corporate entities, or foreign investors. Significant ownership diversification shall be encouraged to enhance the good corporate governance of licensed MFBs. Universal banks that intend to set up either of the two categories of MFB as subsidiaries shall be required to deposit the appropriate minimum paid-up capital, meet the prescribed prudential requirements and, in the view of the regulatory authorities, have also satisfied all the requirements stipulated in the guidelines.
2. No individual, group of individuals, their proxies or corporate entities, and/or their subsidiaries shall establish more than one MFB under a different or disguised name.
1. Universal Banks: Universal banks currently engaging in microfinance services, either as an activity or a product, which do not wish to set up a subsidiary, shall be required to set up a department/unit for such services and shall be subject to the provisions of the MFB regulatory and supervisory guidelines.
2. Community Banks: All licensed community banks, prior to the approval of this policy, shall transform to microfinance banks licensed to operate as unit banks on meeting the prescribed new capital and other conversion requirements within a period of 24 months from the date of approval of this policy. Any community bank which fails to meet the new capital requirement within the stipulated period shall cease to operate as a community bank. A community bank can apply to convert to a microfinance bank licensed to operate in a State if it meets the specified capital and other conversion requirements.
3. Non-Governmental Organization Micro Finance Institutions (NGO-MFIs): This policy recognizes the existence of credit-only, membership-based microfinance institutions, which shall not be required to come under the supervisory purview of the Central Bank of Nigeria. Such institutions shall engage in the provision of micro credit to their targeted population and shall not mobilize deposits from the general public. Registered NGO-MFIs shall be required to forward periodic returns on their activities to the CBN. NGO-MFIs that wish to obtain the operating licence of a microfinance bank shall be required to meet the specified provisions as stipulated in the regulatory and supervisory guidelines.
Transformation of Existing NGO-MFIs: Existing NGO-MFIs which intend to operate an MFB can either incorporate a subsidiary MFB, while still carrying out their NGO operations, or fully convert into an MFB. NGO-MFIs that wish to convert fully into a microfinance bank must obtain an operating licence and shall be required to meet the specified provisions as stipulated in the regulatory and supervisory guidelines.

10. Justification for the Capital Requirements
1. The present capital base of N5 million for community banks has become too low for effective financial intermediation. Indeed, to set up a community bank, at least N5 million is required for the basic infrastructure, leaving zero or a negative balance for banking operations. From a survey of community banks, an operating fund of N50 million is about the minimum capital (own capital and deposits) a community bank needs to provide effective banking services to its clients. However, it is recognized that, since many community banks are based in rural areas, their promoters may not be able to effectively raise N50 million as shareholders' funds. Hence the stipulation of N20 million as shareholders' funds for the unit microfinance banks. The banks are expected to engage in aggressive mobilization of savings from micro-depositors to shore up their operating funds.
2. A State-coverage microfinance bank that would operate multiple branches would be expected to take off with funds sufficient to operate a full branch in at least two-thirds of the Local Government Areas in that State. Hence, a minimum paid-up capital of N1.0 billion shall be required to obtain the licence to operate a State-coverage MFB. Expansion to another State shall be subject to the provision of N1.0 billion in minimum shareholders' funds unimpaired by losses, after opening branches in at least two-thirds of the Local Government Areas of the State in which it is currently licensed to operate, and if, in the view of the regulatory authorities, it has satisfied all the requirements stipulated in the guidelines. The area of coverage (district or province) and the population and volume of business of the area further determine the level of capitalization. The capitalization requirements in other countries were also considered in arriving at the capitalization levels for the two categories of MFBs in this policy.

Aside from its natural resource endowments, Imo State has immense potential for tourism. For example, the top ten attractions include: Oguta Wonder Lake Resort and conference centre, Oguta; the natural springs located at Onicha, Ezinihitte Mbaise; Nekede zoological garden and forestry reserve; the rolling hills at Okigwe; monkey colonies at Lagwa, Aboh Mbaise, and Omuma and Aji, Oru East LGA; Ezeama mystic spring at Isu Njaba; and Ngwu springs at Nkwerre.

1. Area of Study
The study was carried out in Imo State, Nigeria. Imo State was selected because of proximity, cost and familiarity. The State has three geopolitical zones (Orlu, Owerri, and Okigwe zones). It is also delineated into 27 local government areas. The population of the state is 3,934,899 persons, many of whom subsist on farming (NBS, 2007).

2. Data Source
Both secondary and primary data were used in generating information on the effectiveness of microfinance banks in alleviating poverty, as expressed by their customers in Imo State, Nigeria. A questionnaire was designed, titled "Questionnaire on the Impact of Microfinance Banks in Poverty Reduction in Imo State (QIMBPRIS)". A descriptive survey was adopted for the study.
According to Adewumi (1981), as reported by Yahaya, Osemene and Abdulraheem (2011), the survey method was chosen because of its inherent advantages over other research methods.

3. Sampling Method
A stratified sampling method was used in the selection of the customers who expressed their views on the effectiveness of microfinance banks in alleviating poverty in Imo State, Nigeria. In order to have an unbiased selection of samples, the study area was divided into 16 sample units based on the various local government areas in Imo State, Nigeria. The population of the study comprises 40 microfinance banks (MFBs) in the 3 Senatorial Zones of the study area, which consists of 27 Local Government Areas (LGAs). Four (4) MFBs were purposefully selected from each of the 3 Senatorial Zones, making a total of 12 MFBs. Three hundred and eighty-two (382) questionnaires were randomly distributed to customers of these selected microfinance banks in the three senatorial zones as follows: Owerri (82), Okigwe (100) and Orlu (200). Orlu has the highest concentration of MFBs and therefore more customers (Orlu, 65 %; Owerri, 25 %; Okigwe, 10 %); see Figure 1. This is probably due to the fact that it has the highest number of LGAs (twelve out of 27) and that the majority of the MFBs were licensed to operate as unit banks (a.k.a. community banks). Commercial banks are concentrated more in Okigwe and Owerri than in Orlu. However, only eighty (80) questionnaires were properly filled and used for analysis. Respondents were asked to respond to the questions contained in the questionnaire by indicating the level of relevance of the implicated variables. Data generated from the survey were analyzed using descriptive and inferential statistics, such as percentages, means, standard deviations, t-test statistics and Analysis of Variance (ANOVA), at the 0.05 alpha level.

Method of Data Analysis
The average monthly income of the respondents was used, following IPAR (2007), to proxy poverty. Respondents with an income level below $2 per day proxy the rural poor, because Sani (2008) argued that the extreme poor are those with a daily income level of less than one US dollar. This is in line with the millennium declaration, popularly known as the MDGs. Therefore, respondents with an income level above $2 per day were coded 1, and 0 otherwise. IPAR (2007) further used the respondents' level of financial exposure on Savings Account, Current Account, Fixed Deposit, Loans, Automated Teller Machine (ATM)/Credit/Debit Card, Insurance, Mobile Banking, Internet Banking, Shares and Pension to proxy access to finance. They coded 0 for individuals who did not answer the question or did not know the answer, 1 for individuals who had never used the product, 2 for individuals who had used the product before, 3 for individuals who have other members of the household using the product in question, and 4 for individuals who currently have the product. In this study, however, 0 was coded for respondents who did not answer the question or did not know the answer, 1 was assigned to individuals who had never used the product, 2 to individuals who had used the product before, and 3 to individuals who are currently using the product. A logit model was used to analyze the influence of the independent variables (financial services) on the dependent variable (level of income).
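In its standard textbook form, such a logit model expresses the probability of the outcome as a function of the financial-service covariates, and its multinomial extension assigns each income bracket its own coefficient vector:

```latex
P(y_i = 1 \mid x_i) = \frac{e^{x_i'\beta}}{1 + e^{x_i'\beta}}, \qquad
P(y_i = j \mid x_i) = \frac{e^{x_i'\beta_j}}{\sum_{k=1}^{J} e^{x_i'\beta_k}}, \quad j = 1,\dots,J,
```

where x_i collects the usage codes for the services listed above and one bracket's β is normalized to zero for identification. This is the generic specification sketched for orientation; the paper's exact parameterization may differ.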
Since the respondents exhibit different categories of income level, we applied a multinomial logit model to ascertain how the degree of financial services usage varies among the different income categories of the respondents. The respondents were categorized into four income brackets: below N10,000; N10,001-N15,000; N15,001-N20,000; and N20,001 and above. The model was therefore further modified to accommodate the different income brackets of the respondents.

RESULTS AND DISCUSSION
The results are divided into two parts, i.e., descriptive results and inferential results. Table 1 shows that 298 respondents were male while 86 respondents were female, representing 78 % and 22 %, respectively. The table also shows information regarding the marital status of the respondents and indicates that the majority of the respondents were married, numbering 251, while 125 respondents were single and only 8 were divorced, representing 65 %, 33 % and 2 %, respectively. Moreover, the data pertaining to the educational qualifications of the respondents show that 137 of the respondents do not have any formal education, 67 possess a primary school leaving certificate, 81 indicated having a secondary school certificate, 71 hold a diploma/NCE or its equivalent, and 28 have a first degree certificate or above, representing 36 %, 17 %, 21 %, 19 % and 7 %, respectively. These data coincide with the assertion of Beck et al. (2006) that finance appears inaccessible because of the high rate of illiteracy in rural areas. Table 1 further shows the frequency of the occupational distribution of the respondents. It was observed that 7 respondents (2 %) did not respond to the question, 167 (44 %) were farmers, 90 (23 %) were civil servants, and 120 (31 %) indicated business as their occupation. In order to avoid multiplicity of responses, respondents affiliated with more than one occupation were asked to give only their major one. The monthly income brackets of the respondents are similarly depicted in the table: one hundred and eleven (111) respondents (29 %) indicated earning N10,000-N15,000; 95 respondents, or 25 %, indicated N15,001-N20,000 as their income bracket; 94, or 24 %, were earning above N20,000; while 84 (22 %) indicated earning below N10,000.

Table 2 shows the summary of the multinomial logit regression results. It can be discerned from the results that the estimated coefficient of saving is negative but not significant in equation 4, while the estimated coefficient is significant in equations 5 and 6. The result means that the high-income class has more capacity to save than the poor dwelling in rural areas. This finding appears to support the prediction of the economic theory of savings, which argues that saving is a function of the level of income. On the contrary, the estimated coefficients of current account and fixed deposit are positive but statistically insignificant in all models; however, they have 69 % and 23 % probabilities of reducing poverty, respectively. Moreover, the estimated coefficients in equations 4 and 5 are statistically significant at the 1 % and 5 % levels of significance, respectively. The estimated coefficient of loan has the highest probability (98 %) of reducing poverty in rural areas. This finding tends to support Burgess and Pande (2003), who asserted that access to formal finance is critical for enabling the poor to transform their production systems and thus exit poverty.
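For readers who want to see how such coefficient-and-probability readings are produced, a minimal sketch of a multinomial logit estimation in Python follows; the file and column names are hypothetical, standing in for the survey data summarized in Tables 1 and 2.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical file and column names; the usage codes are 0-3 as described above.
df = pd.read_csv("survey.csv")
services = ["saving", "current_account", "fixed_deposit", "loan",
            "atm_card", "insurance", "mobile_banking"]

X = sm.add_constant(df[services])
y = df["income_bracket"]  # 0: below N10,000 ... 3: N20,001 and above

# Multinomial logit: one coefficient vector per income bracket (base bracket normalized).
result = sm.MNLogit(y, X).fit(disp=False)

print(result.summary())                           # coefficients and p-values per bracket
print("pseudo R^2:", round(result.prsquared, 3))  # analogous to the reported 40 %
print("LR test p-value:", result.llr_pvalue)      # overall model adequacy
```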
Access to finance through credit assists the poor not only to smooth their consumption expenditure but also to build their assets, which enhances their productive capacity (IPAR, 2007). Furthermore, the estimated coefficients of ATM and insurance are not statistically significant in any equation but have approximately 79 % and 72 % likelihoods of reducing poverty in rural areas, respectively. Similarly, the estimated coefficient of microfinance is statistically insignificant in all equations, with an 84 % probability of reducing poverty in rural areas. The estimated coefficients of mobile banking are positive and significant at the 5 % level of significance in equations 3 and 4, respectively; the estimated coefficient in equation 5 is not statistically significant. The coefficient has a 17 % probability of reducing poverty in rural areas. The overall model is adequate, as shown by the LR χ² value, which is significant at the 1 % level; in addition, 40 % of the variation in the dependent variable is jointly explained by the independent variables, as shown by the pseudo-R² value.

The study found that access to formal financial services increases with the level of respondents' income in rural areas, and most of the variables examined indicated a very high probability of reducing poverty. It can therefore be concluded that enhancing access to formal finance, especially credit, has a high likelihood of reducing poverty in rural areas. The implication of this study is that the federal government of Nigeria and financial institutions in the country should take up the challenge of establishing bank branches in the rural areas or make formidable arrangements for supplying more credit to the rural dwellers. This study suggests that the group lending strategy of the Grameen Bank of Bangladesh could be copied, since that bank recorded very low default rates. This is based on the premise that the government's policy priority is poverty reduction.
2018-10-30T00:19:56.741Z
2013-11-08T00:00:00.000
{ "year": 2013, "sha1": "89d00c9de7c343111fa1925a3d9bbdead636265f", "oa_license": "CCBY", "oa_url": "http://www.mcser.org/journal/index.php/mjss/article/download/2381/2356", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "e7fed9ab1db670a9d57ffb7ff9caa3ce1ddcf82e", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics", "Business" ] }
15371316
pes2o/s2orc
v3-fos-license
Synaptosomal-associated protein 25 (SNAP-25) gene polymorphism frequency in fibromyalgia syndrome and relationship with clinical symptoms

Background
SNAP-25 protein is contributory to the plasma membrane and synaptic vesicle fusions that are critical points in neurotransmission. The SNAP-25 gene is associated with behavioral symptoms, personality and psychological disorders. In addition, the SNAP-25 protein can be related to different neurotransmitter functions due to its association with vesicle membrane transition and fusion. This is important because the neurologic, cognitive, and psychologic disorders in fibromyalgia syndrome (FMS) can be related to this function, and this relationship may be enlightening for the etiopathogenesis of FMS and for treatment approaches. We aimed to study a SNAP-25 gene polymorphism, which is related to many psychiatric diseases, and its association with FMS in this prospective study.

Methods
We included 71 patients, diagnosed according to the new criteria, and 57 matched healthy women in this study. Both groups were evaluated regarding age, height, weight, BMI, education level, and marital and occupational status. New diagnosis criteria scoring for FMS, the SF-36, the Beck depression scale, and the VAS were applied to the patient group. SNAP-25 gene polymorphism and disease activity score correlations were compared.

Results
Mean age was 38 ± 5.196 and 38.12 ± 4.939 years in the patient and control groups, respectively (p = 0.542). No significant difference was found between the groups regarding age, height, weight, BMI, education level, or marital or occupational status (p > 0.05). The DdeI T/C genotype was significantly more frequent in the patient group (p = 0.009). The MnlI gene polymorphism did not show a correlation with any score, whereas a significant correlation was found between the DdeI T/C genotype and the Beck depression scale and VAS scores (p < 0.05).

Conclusion
The etiopathogenesis of FMS is not clearly known. Numerous neurologic, cognitive and psychological disorders have been found in studies looking at its cause. Our study showed an increased frequency of the SNAP-25 DdeI T/C genotype in FMS patients compared with the control group, which is related to the behavioral symptoms, personality and psychological disorders in FMS patients.

Background
Fibromyalgia syndrome (FMS) is a non-inflammatory rheumatologic disease characterized by widespread musculoskeletal pain, lethargy and tenderness without a definite cause [1]. The FMS diagnosis is used for heterogeneous pathological states including anxiety disorders, depression, lethargy, sleep disorders, and gastrointestinal system symptoms with widespread pain [2]. One of the reasons that FMS is thought to be such a heterogeneous disease may be the concurrent psychological disorders. Indeed, psychiatric symptoms are very common in this syndrome, and they influence the course of the disease [3,4]. FMS patients are reported to have increased depression, anxiety and bipolar disorder comorbidities [3,5]. Several studies have identified disorders in personality inventory profiles, based on the thought that some personality and mood disorders might predispose to FMS [6][7][8]. Dopamine is an important mediator in both psychopathological events and pain conduction. Dopamine D2 receptor sensitivity and density are increased in FMS patients [9]. Also, dopamine D4 receptor gene polymorphism has been found to be relevant to the FMS personality profile [10]. SNAP-25 protein is contributory to plasma membrane and synaptic vesicle fusion [11].
SNAP-25 forms complexes with synaptobrevin in synaptic vesicles and with syntaxin in the plasma membrane [11]. In simple terms, the SNAP-25 protein is critical in neurotransmission for the fusion of synaptic vesicles with the plasma membrane. Several studies have investigated the relationship between SNAP-25 gene polymorphisms and personality disorders, schizophrenia, and attention deficit and hyperactivity disorder, and have reported that the SNAP-25 gene might influence the development of these disorders [12][13][14][15][16][17]. Furthermore, because of its involvement in vesicle-membrane docking and fusion, the SNAP-25 protein might be relevant to several other neurotransmitters. This is important because it could underlie the neurological, cognitive, and psychological disturbances seen in FMS. If such a relationship exists, it will shed light on the etiopathogenesis of FMS and on treatment approaches. In this prospective study, we aimed to evaluate two SNAP-25 gene polymorphisms (MnlI = rs3746544 and DdeI = rs1051312), which have been linked to many psychiatric diseases, and their association with FMS.

Patients and evaluation

We included 71 female patients diagnosed according to the ACR 2010 fibromyalgia criteria and 57 age-matched healthy females. Patients and healthy volunteers were informed about the genetic evaluation, and informed consent was obtained. Blood (10 cc) was drawn into EDTA tubes from both the patient and control groups and stored at −20°C. The new FMS diagnostic criteria score (ACR 2010), VAS, SF-36, and Beck depression scale were applied to the patient group. The prevalence of the SNAP-25 polymorphisms and their association with disease activity were compared between groups. The local ethics committee (Pamukkale University Clinical Research Ethics Committee) approved the study, and patient consent was obtained.

Tools

A sociodemographic information form developed by the researchers recorded each patient's age, sex, education, socioeconomic status, settlement, marital status, and disease duration. The Beck depression scale (BDS), developed by Beck et al. and adapted to Turkish by Hisli (Hisli 1989), was used to determine the risk of depression and the level and intensity of depressive symptoms. The VAS, which consists of 3 parts for measuring pain, has been adapted to Turkish and used in numerous studies. The SF-36 scale, used to determine quality of life, has undergone validity and reliability testing for use in Turkish.

Molecular analysis

Genetic evaluation was performed in our university's Medical Genetics Department. Patient and control genomic DNA was isolated from peripheral blood with a DNA extraction kit. The SNAP-25 MnlI (rs3746544) and DdeI (rs1051312) polymorphisms lie in the 8th exon; the primer pair Forward 5′-TTC TCC TCC AAA TGC TGT CG-3′ and Reverse 5′-CCA CCG AGG AGA GAA AAT G-3′ was used to amplify the UTR region. In addition to these primers, the reaction contained 10X PCR buffer, 5 μl of dNTP mix containing 0.2 mM of each nucleotide, and Taq polymerase. PCR conditions were an initial denaturation at 95°C for 2 minutes; 35 cycles of 95°C for 45 seconds, 58°C for 1 minute, and 72°C for 2 minutes; and a final elongation at 72°C for 7 minutes. Then 10 U of DdeI and 10 U of MnlI were added separately to the resulting 261 bp PCR products, which were incubated for 14 hours at 37°C for digestion. A 3.5% ultrapure agarose gel was prepared to separate the fragments, and the digested PCR products were subjected to 40-50 minutes of electrophoresis. The expected band patterns were as follows: for the DdeI polymorphism, a single uncut 261 bp band for the T allele, and two bands of 228 bp and 33 bp for the C allele; for the MnlI polymorphism, two bands of 256 bp and 5 bp for the T allele, and three bands of 210 bp, 46 bp, and 5 bp for the G allele.
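Read as a decision rule, these band patterns amount to a lookup from the fragment sizes observed on the gel to allele calls, with a heterozygote showing the union of both alleles' bands. The following is a minimal illustrative sketch of that genotype-calling logic in Python; the fragment sizes are taken from the protocol above, while the function and table names and the sample lanes are hypothetical:

```python
# Expected restriction fragment sizes (bp) per allele; the uncut PCR product is 261 bp.
DDEI_ALLELES = {
    frozenset({261}): "T",       # DdeI does not cut the T allele
    frozenset({228, 33}): "C",   # DdeI cuts the C allele into two fragments
}
MNLI_ALLELES = {
    frozenset({256, 5}): "T",       # MnlI T allele: two fragments
    frozenset({210, 46, 5}): "G",   # MnlI G allele: three fragments
}

def call_genotype(bands: set, allele_table: dict) -> str:
    """Call a genotype from the set of band sizes seen in one lane.

    A homozygote shows the bands of a single allele; a heterozygote
    shows both alleles' band patterns at once.
    """
    alleles = [a for pattern, a in allele_table.items() if pattern <= bands]
    if len(alleles) == 2:               # both patterns present -> heterozygote
        return "".join(sorted(alleles, reverse=True))
    if len(alleles) == 1:               # one pattern -> homozygote
        return alleles[0] * 2
    raise ValueError(f"Unrecognized band pattern: {sorted(bands)}")

# Hypothetical lanes: a DdeI heterozygote shows all three bands.
print(call_genotype({261, 228, 33}, DDEI_ALLELES))  # -> "TC"
print(call_genotype({256, 5}, MNLI_ALLELES))        # -> "TT"
```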
Statistical analysis

Statistical analysis was performed with SPSS version 20. Descriptive statistics are given as mean, standard deviation, and percentage, and a 95% confidence interval was used. Intergroup significance was evaluated with the chi-square test for qualitative data and the Mann-Whitney U test for quantitative data. Relationships between quantitative variables were evaluated with the Spearman correlation test, and nonparametric analysis of variance was performed across the genetic polymorphisms. p < 0.05 was considered significant.

Results

Mean age was 38.00±5.196 years in the patient group and 38.12±4.939 years in the control group (p=0.542). No significant difference was found between the groups regarding age, height, weight, BMI, education, settlement, or marital or occupational status (p > 0.05) (Table 1). The distribution of the DdeI polymorphism differed significantly between the groups (p = 0.009); to determine the reason for this significance, pairwise comparisons were performed, which showed that the TC genotype accounted for the difference. The MnlI and DdeI genotype distributions of the patient and control groups are given in Table 2. We evaluated the new fibromyalgia diagnostic criteria and their subscores, the Beck depression scale, the visual analogue scale (VAS), and the short form-36 (SF-36) subparameter scores (Table 3), and performed nonparametric analyses of variance across the genetic polymorphisms. For the MnlI polymorphism, individuals of the three genotypes (TT, TG, GG) did not differ significantly in VAS, BDS, or SF-36 scores (p > 0.05). For the DdeI polymorphism (TT, TC, CC), no significant difference was found for TT and CC individuals; however, TC genotype individuals had significantly higher BDS (p = 0.045) and VAS (p = 0.033) scores (Table 4). This difference remained significant after Bonferroni correction.
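The group comparison reported above is, in essence, a chi-square test on a genotype-by-group contingency table, followed by pairwise comparisons with a Bonferroni correction. The sketch below illustrates such an analysis with scipy; the cell counts are invented placeholders (the study's actual counts are in Table 2), and the genotype-versus-rest follow-up is one plausible reading of the pairwise procedure described above:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x3 contingency table of DdeI genotype counts
# (rows: FMS patients, controls; columns: TT, TC, CC).
table = np.array([
    [30, 35, 6],   # patients (n = 71), placeholder counts
    [35, 17, 5],   # controls (n = 57), placeholder counts
])

# Overall test of whether the genotype distribution differs between groups.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")

# Pairwise follow-up: each genotype versus the two others combined,
# with a Bonferroni correction across the three comparisons.
for i, genotype in enumerate(["TT", "TC", "CC"]):
    collapsed = np.column_stack([table[:, i], table.sum(axis=1) - table[:, i]])
    _, p_pair, _, _ = chi2_contingency(collapsed)
    print(f"{genotype} vs rest: p = {p_pair:.3f} "
          f"(Bonferroni-corrected: {min(1.0, 3 * p_pair):.3f})")
```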
Discussion

The etiopathogenesis of FMS, which is characterized by widespread musculoskeletal pain and tenderness, is not clearly understood. FMS, chronic fatigue syndrome, irritable bowel syndrome, tension-type headache, and myofascial pain syndrome belong to the central sensitization syndromes, each of which overlaps with FMS. One common feature of these syndromes is increased stimulation of central neurons through various synaptic and neurotransmitter/neurochemical activities in the absence of structural pathology, which manifests as increased sensitivity to pressure and touch. Many factors contribute to central sensitization syndrome, including pain, fatigue, sleep disorder, sensitivity to various stimuli, and psychosocial problems. For the past several years, environmental as well as genetic conditions have been implicated in FMS and related functional somatic disorders such as irritable bowel syndrome, migraine, and chronic fatigue syndrome [18][19][20]. Several studies have tried to explain the etiopathogenesis of fibromyalgia syndrome, showing that serum and central nervous system levels of serotonin and its metabolites are low, and that serotonin transport in cerebrospinal fluid is slow. For this reason, the serotonin transporter gene region has attracted attention: Offenbaecher et al. reported that the S/S genotype is more frequent in FMS patients than in healthy subjects [21]. Dopamine, which has a role in both psychopathological events and pain transmission, is another molecule generating research interest; studies have examined both dopamine levels and dopamine receptor gene polymorphisms in relation to FMS [9,10,22,23]. It is known that psychiatric disorders, especially depression and anxiety disorder, are quite common in FMS patients [3], and several studies have reported a depression prevalence in FMS of between 20% and 80% [3,4,7]. FMS patients are reported to have difficulty both in understanding their own feelings and in coping with emotions. Several studies have identified abnormalities in personality inventory profiles, based on the idea that some personality and mood disorders might predispose to FMS; when the personality inventories of patients were evaluated, the hypochondriasis, hysteria, paranoia, and depression scales were found to be higher than in control groups [6][7][8]. Studies indicate that a low ability to cope with everyday problems can trigger FMS or increase symptom intensity [6,8]. In sum, some psychiatric disorders are more prevalent in FMS, and some personality and mood features may predispose to its development; nevertheless, the relationship between these conditions and FMS is not clearly known. Psychiatric disorders are increased in FMS and are relevant to symptom intensity [4,7]. Some personality and mood features can predispose to FMS, but such features may also reveal themselves as a result of the stress of coping with pain [24,25]. There may be common pathophysiological features between FMS and psychiatric disorders and personality profiles, and dopamine- and serotonin-mediated neurotransmission may be among these common biological factors [23]. These disorders also share risk factors, such as exposure to difficult experiences: such factors cause stress, prolonged stress leads to cytokine secretion, and increased cytokines precipitate both psychiatric disease and heightened pain perception [26,27]. FMS and the etiopathogenesis of these disorders cannot be explained by a single cause, and the relationship between the concurrent disorders and FMS is not clearly known. SNARE (soluble N-ethylmaleimide-sensitive factor attachment protein receptor) proteins mediate fusion between organelles, and between organelles and the plasma membrane, in eukaryotic cells. SNAP-25 is a SNARE protein found in the plasma membrane. These proteins have important roles in electrical conduction in nerve cells: neurotransmitters are stored in synaptic vesicles and released by exocytosis into the synapse. SNAP-25 therefore affects the secretion of dopamine and other neurotransmitters and may thus contribute to FMS pathogenesis. SNARE proteins should accordingly be studied carefully, as this may lead to a better understanding of the etiopathogenesis and of treatment approaches, and may help explain the relationship between FMS and concurrent psychiatric disorders and personality features. In this prospective study, we found an increased frequency of the SNAP-25 DdeI TC genotype in FMS patients; in individuals with the TC genotype, BDS and VAS scores were significantly higher than in individuals without it. FMS is one of the central sensitization syndromes.
One common property of these diseases is increased neuronal excitability mediated by various synaptic and neurotransmitter/neurochemical activities in the absence of structural pathology, resulting in heightened reactions to ordinary stimuli. In our study, patients with the TC genotype had increased BDS and VAS scores, which may result from this polymorphism affecting neurotransmitter functions involved both in mood and in pain transmission. No previous studies have addressed this question, and our results should be compared with those of similar studies in the future. The primary limitation of this study is the attempt to explain a disease as complex as FMS with only one polymorphism and a small number of patients. Future studies with more patients and other disease-related polymorphisms will help us understand FMS better.

Conclusions

The etiopathogenesis of FMS is not clearly known, and numerous neurological, cognitive, and psychological disturbances have been found in studies attempting to identify its cause. Our study showed an increased frequency of the SNAP-25 DdeI T/C genotype in patients compared with controls, a genotype related to behavioral symptoms, personality traits, and psychological disorders in FMS patients.
Serial killing in schizophrenia

Abstract

Serial criminality, although rare, has always aroused the interest of researchers in criminology, psychiatry, psychology, and sociology. We report the case of a patient suffering from a chronic psychotic disorder who committed several murders over a period of 9 years, underpinned by a delusional misidentification of the Fregoli syndrome type.

| INTRODUCTION

Serial criminality, although rare, has always aroused the interest of researchers in criminology, psychiatry, psychology, and sociology. Arousing fear, revolt, and incomprehension, but also fascination and interest, serial killers continue to make headlines and inspire the most appalling of successful "thrillers." The clinical histories of such murderers often reveal absurd, horrific, monstrous, and incomprehensible homicides, often out of proportion to the avowed causes, or even without apparent motive. Although repeat murderers have always existed, individualizing the concept of the serial killer and developing a universally accepted definition still poses challenges. The typology of serial killers is vast and complex, and several classifications exist, with distinguishing criteria often drawn from different fields. One of the most widely used, adopted by the Federal Bureau of Investigation, is the classification proposed in 1980 by Hazelwood and Douglas, 1 which differentiates between organized and disorganized killers: "The organized killer is an egocentric, amoral and manipulative individual who methodically commits his murderous crimes," whereas "the disorganized killer is a lonely individual, experiencing feelings of rejection, committing murderous acts out of opportunity." Bénézech 2 presents a dichotomy of serial killers based on rational versus psychopathological motivations, differentiating schematically between the psychopathic murderer and the psychotic murderer, although this taxonomy is only indicative, as some criminals fall in between. In this work, we report the case of a patient suffering from a chronic psychotic disorder who committed several murders over a period of 9 years, underpinned by a delusional misidentification of the Fregoli syndrome type.

| OBSERVATION

A patient was involuntarily hospitalized, at the age of 34, in the forensic psychiatry department of the Razi hospital, Tunisia, after the charges against him were dismissed on the grounds of insanity, having committed eleven murders over a period of 9 years. He came from a non-consanguineous marriage. His family lived with his maternal grandparents when he was born, and their socio-economic conditions were precarious. He was the second child of 4 boys and 2 girls. He left school before completing the first year of primary school, started working at the age of 10, and held several odd jobs, with a work history marked by instability. His parents were divorced; his father remarried and had another son. The patient is single and did not perform military service. The patient was brought up by his maternal grandparents, who spoiled him and were complacent and permissive toward him. The family, of rural origin, moved to the city for economic reasons when the patient was 6 years old, and the moves continued as a result of the father's unstable work. Instability and violence reigned within the patient's family: the older brother was placed in an adoption center from an early age, and the patient himself was violent toward all the members of his family. He described his mother as ambivalent.
She was protective, fusional, and violent at the same time. His father had a criminal record and was an alcoholic; the patient described him as extremely violent, even sadistic. The patient reported violence from his older brother, his mother, and especially his father. Until pre-adolescence, the patient slept in the same bed as his two parents. After their separation, he reports having continued to sleep on his father's side of the bed and says he noticed that his father hid knives under it. The patient was a very unstable, aggressive child. His childhood included cruelty to animals, and during adolescence he had erotic-themed fantasies involving women, men, and adolescents. He was allegedly sexually abused as a teenager; his attacker was in his 50s and was reportedly sentenced to 5 years in prison. As an adolescent, he went almost daily to the cinema to watch erotic films as well as horror films. His social relations were very disturbed and generally poor, with an inability to form lasting relationships, which were often reduced to aggressive and purely utilitarian behaviors. He assaulted all members of his family except his father, repeating each time that his father scared him very much and that he had a terrifying, frightening image of him. His relationships with women were almost non-existent. According to him, he had no addictive behaviors. Before his arrest for the series of murders, he had been jailed twice: first at the age of 14, in a correctional facility for a year, following an act of physical violence, and again at age 20, for 8 months, for attempted rape. The patient was hospitalized in a general psychiatry department a year before his first murder because of running away and aggressive behavior. He had, among other things, tried to put out the eye of one of his brothers and had threatened to cut his mother's throat. The diagnosis retained was paranoid schizophrenia. The patient escaped from the hospital after 1 week and was arrested 10 years later for a series of murders for which he was deemed not criminally responsible, hence his admission to the forensic psychiatry department. Regarding his criminal history, he had committed 11 murders and one attempted homicide. His first crime was at the age of 25, and his murderous activity continued over a period of 9 years. The last crime was committed during the day, in front of passers-by, and was followed by the attempted homicide of the police officer who was chasing him to arrest him. He reported killing older men, and the crimes were committed according to a stereotyped modus operandi. The weapon used was a weapon of opportunity, a stone, with which he struck his victims' heads. Each murder was accompanied by an act of post-mortem emasculation by means of a knife; indeed, he always kept this weapon on him, which, as he mentioned with a big smile, gave him a feeling of power. He killed his victims almost always at night or at dawn, in cemeteries or near mosques. After the emasculation, he placed the victims' sexual organs in their mouths. There was no concealment of the body, of the weapon used, or of the evidence of the crime. When the patient spoke of his crimes, anxiety was apparent, whereas at the evocation of the acts of emasculation a smile of pleasure appeared on his face.
The motive for the crimes, as reported by the patient, was revenge: he was convinced that his victims were in fact the aggressor who had allegedly abused him as a child. In the evening, he saw them again in his hallucinations, making indecent proposals to him; a fury of destruction then seized him, and the next day he attacked them in their sleep and killed them. His victims were homeless men living in the streets. In his visual hallucinations he also saw naked men and women, fire, crushed heads, blood, and a mixture of erotic scenes and scenes of extreme violence and horror. He also had imperative auditory hallucinations ordering him to kill and to put out his victims' eyes, as well as intrapsychic hallucinations: Quranic verses and tales of the president came out of his chest, head, and stomach. The psychiatric examination thus revealed a paranoid delusional syndrome, Fregoli syndrome, a mental automatism syndrome, visual, auditory, and intrapsychic hallucinations, and disorganization. The diagnosis retained was paranoid schizophrenia. In addition to the psychotic structure, projective tests revealed a perverse dimension, reflected in the subject's inability to conceive of enjoyment without sadism or of sexuality without aggression. The physical examination and paraclinical explorations did not show any organic disorder.

| DISCUSSION

The reported case corresponds well to a repeat offender of the serial killer type, having committed more than three homicides, in different places, over a period spanning many years, with no apparent motive but an often identical criminal scenario and one particular profile of victims. All the homicides committed were supported by delusional misidentification of the Fregoli illusion type. The delusional misidentification syndromes (DMS) are complex psychotic phenomena that may be present in several neurological disorders, essentially neurodegenerative, or in psychiatric disorders such as schizophrenia, delusional disorders, bipolar disorders, and schizoaffective disorders. 3 In DMS, the subject misidentifies people, places, objects, or events, or identifies them as doubles. 4,5 The common theme of these syndromes is exact resemblance to another: the lookalike or the double. 3 DMS include Capgras syndrome, Fregoli syndrome, self-misidentification, disorientation of places, the syndrome of subjective doubles, intermetamorphosis, and duplicative paramnesia. DMS are also considered specific risk factors for aggressive acting out and violence: they are characterized by hostility toward the misidentified objects, which entails significant danger toward others. 6,7 The prevalence of DMS is estimated at 3% in the general psychiatric population. 8 However, some authors believe that these are under-diagnosed disorders, owing to the absence of systematic clinical screening as well as the absence of reliable and standardized diagnostic criteria. 3,9 Fregoli syndrome is defined by the delusional belief that one or more familiar people, usually persecutors of the subject, repeatedly change their appearance (i.e., the same person assumes many different disguises). The Fregoli illusion can also involve animals, inanimate objects, or even places. 10 In our case, we consider that Fregoli syndrome was the predisposing factor for the aggressive act. Little of the literature has examined the relationship between violence and Fregoli syndrome; the available work consists of case studies or retrospective descriptive studies with small populations.
Some studies have classified Fregoli syndrome as a specific risk factor for violence against the misidentified person. 11 Indeed, under the effect of the delusional activity, affected subjects regard the misidentified person with suspicion and hostility, which can feed growing ideas of persecution as well as an aggressive attitude framed as preventive self-defense. 12 Physical assault can progress to homicide. 13 Other studies took a more global view and considered additional confounding risk factors for violent acting out. These studies posed the question: are subjects suffering from Fregoli syndrome, whatever the underlying psychiatric pathology, more violent than subjects not affected by these delusions? 3 It has been shown that, apart from delusional misidentification, there are other risk factors for an aggressive act, namely a history of physical violence, 14 anger, the targets of violence being more often close relatives and attachment figures, 15 male sex (in 70% of cases), 13 delusional themes of erotomania 16,17 or jealousy, 18 substance abuse, 19,20 and impulsivity and dissociation. 11 Statistically, schizophrenia is the most frequent etiology in which Fregoli syndrome occurs, 21,22 as was the case for our patient. The literature has proposed three explanatory models of Fregoli syndrome: neurobiological, psychoanalytic, and cognitive. Neurobiologically, Fregoli syndrome is believed to be associated with some degree of objective impairment of facial recognition, caused by a fault in the identification process that leads to an inability to assign an identity to a specific person; hyperactivity of the cerebral cortex, in particular of the right hemisphere, may explain the hyperfamiliarity seen in Fregoli syndrome. 23 The psychoanalytic explanatory model of Fregoli syndrome is based on the splitting of parental figures into good and bad objects, following the theory of Melanie Klein. 24 In addition, this syndrome is defined by the meeting of three instances: the "sick" subject under the effect of delusional ideas, the alter (the third person well known to the patient), and the alias (the disguised impostor, present in a significant way in the delusion). 10 The cognitive explanatory model proposes that non-recognition in the context of Fregoli syndrome leads to the absence of a feeling of familiarity, given the impossibility of successively integrating memories of a person together with episodic experiences, thus generating delusional duplicates according to the subject's needs and motivations. 25 Currently, there are no specific recommendations for the treatment of DMS. A review of the literature on 84 clinical cases of DMS concluded that antipsychotics have been widely used, primarily olanzapine, risperidone, aripiprazole, quetiapine, haloperidol, and clozapine. 26 The efficacy of antipsychotics remains controversial given the absence of randomized studies, and some studies have noted the persistence of delusions despite antipsychotics. 27 Hypnosis has also proved effective in half of the subjects followed for Fregoli syndrome. 28 Some authors believe that hypnosis serves to rework the faulty mirror identification, which would make it possible to keep delusional ideas at bay. 29
Finally, electroconvulsive therapy constitutes a particularly effective therapeutic alternative in Fregoli syndrome, mainly in the context of postpartum mood disorder. 30 The literature does not provide enough data concerning DMS in general and Fregoli syndrome in particular. This is explained by the absence of standardized diagnostic criteria, hence a limited number of works on the subject, 31 and by the variability of definitions from one publication to another, which complicates the generalization of results. 27

| CONCLUSION

This illustration of the serial killer emphasizes the dangerousness of delusional misidentification syndromes. Early identification and treatment are essential to prevent violent behavior.
"Be Soft": Irony, Postfeminism, and Masculine Positions in Swedish Sport Betting Commercials

Gambling advertising usually draws heavily on gendered stereotypes, including portrayals of male gamblers as tough and successful. Meanwhile, representations of men in advertising have grown increasingly diverse, with emotional and sexualized men accompanying heroic, muscular portrayals. In this article, both these bodies of research are drawn upon to discuss a series of Swedish sports betting commercials which encourage the viewer to "bet hard" while also "being soft." The celebration of "softness" is ambiguous but can be seen as referencing gendered, political discussions about men and masculinity. Engaging with hybrid masculinities theory, postfeminism, and discourses about gambling and betting, the article demonstrates that meanings around "softness" are ambiguous and ironic, and serve to normalize gambling by distancing it from discourses about addiction. The commercials represent a shift in gambling advertising, but the linking of men's politics to gambling also represents a new complexity in narratives about "new," or "soft," men.

Introduction

A man, dressed in army pants and smudged with soot, crawls along the floor in a darkened room full of hanging chains, oil drums, and fires, eventually pointing a shoulder-mounted gun at a giant lion-shaped statue. The exploded statue turns out to have enclosed football player Zlatan Ibrahimović, and Ibrahimović and the shooter, actor Dragomir Mrsic, start roaring at each other, emulating the lion. As the music intensifies, Ibrahimović looks increasingly apprehensive. "Feels a bit . . . cheesy? Come on, screw this," he says, and the two men are transferred to a tea parlor. Looking around and contentedly commenting on how "soft" it is, they sit back, pour each other tea, and offer each other cookies. At last, a logo, a closed fist, and a slogan, "Bethard. Be Soft," appear. These scenes are part of a series of Swedish sports betting commercials that make up the campaign Bethard. Be Soft, launched by the Swedish sports betting company Bethard (Bethard 2018a, 2018b, 2018c). In this article, I scrutinize discourses around gambling, men, and masculine positions in these commercials in order to discuss developments in consumerist culture, gendered and racial politics, and gambling politics in Sweden and beyond. This includes the imperative to be soft and to celebrate softness, evident in the scenes described above, which I argue should be interpreted in relation to gender politics but also as a way of rehabilitating betting and gambling. Research on gambling commercials and gender indicates that stereotypical portrayals of masculinity are common (Deans et al. 2016; Jouhki 2017; Lopez-Gonzalez et al. 2018; Sen and Lou 2016). The assertion that the roaring, oil drums, and fires, all connected to hypermasculine action-movie performances, are "cheesy" makes this commercial stand out, signaling, I suggest, a shift in gambling commercials and a connection to postfeminist representations of men.

Portrayals of Men: Irony and Power

Over the last 60 years, cultural representations of men have become increasingly varied. Heroic, unemotional, and "laddish" portrayals now coexist with caring, emotionally mature, and sexualized ones (Beynon 2002).
Media scholars have debated the meanings of these developments, discussing whether softer, more emotional portrayals indicate a substantial shift in gendered power relations, and how to interpret the irony often present in these portrayals. Simply put, are they a way of giving the "same old" gender relations a new and more acceptable surface, or is this too pessimistic an interpretation? Is the irony a way of escaping critique, or is it an effect of the increasingly ambivalent meaning-making characteristic of late modernity (Benwell 2003; Beynon 2002; Gill 2007)? Similar developments can be discerned in advertising. While often portrayed as actors, subjects, strong, and in control, men are addressed in new ways in contemporary consumerist culture (Frank 2014; Gee 2014; Scheibling and LaFrance 2019). Indeed, as Susan M. Alexander (2003, 536) argues, while corporations may have an interest in maintaining traditional gender roles to ensure continued consumption of their products, "they also serve as agents of social change by creating new consumer markets," as epitomized by sexualized male bodies, addressed as in need of fashion and hair-removal products. This tension is captured by Sarah Gee (2014), who discusses widely varying portrayals of David Beckham as a successful football star, heterosexual family man, and metrosexual icon advertising perfume and underwear. Gee interprets this combination of feminine and gay aesthetics with more traditional ones as "flexible masculinity," and emphasizes that Beckham's muscular male body, together with a firm image as a heterosexual and successful footballer, forms a kind of capital, which enables his position to be interpreted as flexible rather than effeminate or nonnormative. Another aspect of this is addressing men and masculine positions with irony and humor (Barber and Bridges 2017; Messner and de Oca 2005; Schroeder and Zwick 2004). Michael A. Messner and Jeffrey Montez de Oca (2005) show that men are humorously portrayed as unheroic losers in alcohol advertising, but the irony ultimately becomes a way of portraying alcohol and "buddies" as young, White men's only comfort while also depicting women in sexist ways. Kristen Barber and Tristan Bridges (2017) argue that the constructed nature of masculinity itself is humorously exposed in the advertisements they study, which feature ironic portrayals of hypermasculine men with feminine-coded products. They suggest that this is an example of "hybrid masculinities," which, according to Bridges and C. J. Pascoe (2014, 246), constitute a "selective incorporation of elements of identity typically associated with various marginalized and subordinated masculinities and-at times-femininities into privileged men's gender performances and identities." "Hybrid" and "flexible" masculinities indicate changes in portrayals of men, and in men as subjects and objects of consumerist, capitalist imageries. However, both perspectives tend to end in a negative answer to the question posed earlier, and this applies also to the researchers focusing on irony: the seemingly new portrayals of men may indicate changes in consumerist culture, but gendered power relations are changed in style, not substance (see also Messner 2007). Bethan Benwell (2003) is slightly more open-ended in her discussion of UK men's lifestyle magazines, where she argues that the magazines oscillate between promoting a heroic and an antiheroic position, using irony and humor directed at heroes, antiheroes, readers, and writers alike.
Such multifaceted ironies are present also in the Bethard commercials, along with feminist, or at least feminist-inspired, discourses. Especially the celebration of softness must, I suggest, be understood as an example of postfeminism. Postfeminist men, and postfeminism more generally, have been discussed by Rosalind Gill (2007, 2016) and others (e.g., Butler 2013; Dow 2006; Hansen-Miller and Gill 2011; O'Neill 2015). These authors argue that contemporary portrayals of men, including advertising, refer to feminism by simultaneously celebrating, undermining, and commodifying it. For instance, the "lad lit" and "lad flicks" discussed by David Hansen-Miller and Gill (2011) and Gill (2014) contain knowing references to feminist critique as well as humorous objectification of women, a combination which the authors suggest makes the objectification difficult to criticize. In these representations, men and masculine positions are portrayed in what Benwell would probably call an antiheroic fashion; they are clumsy, lost, professional and personal failures, surrounded by women who "have it together" (Gill 2014; Hansen-Miller and Gill 2011). Rachel O'Neill argues that "postfeminism currently represents an acute endeavor for critical masculinity scholarship, precisely because postfeminism effects the erasure of sexual politics" (2015, 115), a statement with which Gill (2014, 200) would probably agree: "Far from mocking or unmasking male power the presentation of ineptitude and confusion seems strategically designed to maintain it, while simultaneously effacing it and claiming that men are the disadvantaged losers in the 'new' gender stakes." The research about irony cited earlier does not mention postfeminism, but there are important parallels: ironic commercials can be interpreted as postfeminist humor, explicitly and implicitly making fun of normative masculine positions, thus taking up and using feminist critique as a commercial strategy. Research on postfeminist representations of men in Sweden, often said to be one of the most gender-equal countries in the world, is a small field (Björklund 2018; Goedecke 2020), and Swedish research about representations of men in advertising is similarly scarce. Masculine positions associated with emotionality, present fatherhood, and gender equality are celebrated (although seldom realized) goals in Sweden (Gottzén and Jonsson 2012; Klinth 2002), but how such masculine positions are produced, portrayed, celebrated, or critiqued in cultural representations needs further study. In this article, I contribute to these fields of research while also furthering the debate about postfeminism and changes in portrayals of men that has been conducted primarily in US and UK contexts.

Gambling Advertising and Gender

In the 1980s, a process of commercializing gambling took place in Sweden (Svensson 2013, 6). Previously subject to a state monopoly, the Swedish gambling market was opened to commercial, licensed companies in 2019. In the years leading up to this, gambling companies registered outside of Sweden had been advertising themselves in media broadcast from abroad (the commercials studied here are examples of this), which resulted in heated debate about "aggressive" advertising and a promise from the minister of civil affairs, Ardalan Shekarabi, that the issue would be subject to a public inquiry (SVT 2019).
Gambling research has a history of gender-blindness (Mark and Lesieur 1992), but more recent quantitative studies have shown that gambling is gendered: men gamble more than women, spend more money gambling, and prefer strategic games such as sports or horse betting and poker to games of chance, in Sweden and elsewhere (Svensson 2013, 11f, 18). Gambling has often been studied within medical, psychological, or public health research (Cassidy et al. 2013; McGowan 2004), and is increasingly understood in terms of addiction, that is, within a medical rather than, for instance, a religious discourse positing gambling as sinful (Edman and Berndt 2016; Walker 1996). I suggest that this shift demonstrates the need for discursive tools when scrutinizing contemporary meanings around gambling (see also Cassidy et al. 2013; McGowan 2004). In this research, quantitative methods have dominated and gender has often been approached as a static variable. Criticizing this, UK researcher Rebecca Cassidy (2014) argues that gambling and betting take place in gendered arenas, such as betting shops, and are practices where masculine positions may be produced through, for instance, foregrounding mathematics, logic, control, and knowledge about sports or horses rather than luck or chance. This connects to themes central to Western conceptions of masculinity (Lloyd 1993). Following this, it is reasonable to suggest that different forms of gambling have different gendered connotations, even if this needs further scrutiny by gender scholars. While most forms of gambling are arguably connected to risk-taking, sports betting and horse racing are connected to knowledgeability, rationality, and control (Hansson 2004), with sports possibly adding an element of masculine embodiment and toughness (Whannel 1999). Contrastingly, poker and casino games are associated with toughness, glamour (Jouhki 2017) and, variously, skill and control. Swedish research about gender and gambling is still in its infancy (Svensson 2013 is an exception). Qualitative research and research on gambling advertising are similarly scarce, and apart from a brief discussion by Philip Lalander (2006), gender is not discussed (ethnographies include Binde 2011 and Hansson 2004; gambling advertising is studied by Kroon 2019 and gambling politics by, e.g., Alexius 2017). Internationally, a small body of research discusses gambling advertising and gender. Researchers from various parts of the world (Australia, Macau, Finland, Spain, and the UK) point to the predominance of men among those addressed as gamblers (and winners) and to stereotypical and sexualized portrayals of women, and show that gambling is often framed as an activity undertaken among male friends (Deans et al. 2016; Jouhki 2017; Lopez-Gonzalez et al. 2018; Sen and Lou 2016). For instance, Jukka Jouhki (2017) studies Finnish poker advertisements and shows that they contain "images traditionally associated with masculinity: seriousness, gravity and power or aggression." He points to dark colors and muscular, tough, athletic, or glamorous men, sometimes surrounded by glamorous, admiring women. Apart from Jouhki, this research is predominantly conducted and published within public health or addiction studies, and theoretical and methodological tools from gender studies and cultural studies, which I suggest could greatly deepen the analyses, are often absent. This article contributes to gambling advertising research by putting it in dialogue with these fields, including critical studies of men and masculinities.
Returning to the discussion about changed representations of men in contemporary consumer culture, Jouhki suggests that this has not affected poker advertisements: "the hegemonic masculinity which is nowadays more flexible and contested in ads than ever . . . is rather stable, if not stereotypical in poker . . . the gaming men are rock solid and operate in the 'masculine mode of exigency and competition'" (Jouhki 2017, 196). Moreover, Cassidy argues that while masculine dominance has been critiqued in sports, entertainment, business, and politics, betting has been dismissed as "trivial, morally ambiguous and inconsequential." As a result of this lack of critical attention, betting shops have been "able to provide a 'haven' for the performance of a particular kind of masculinity" (Cassidy 2014, 187). As is evident already in the opening quote, my material complicates this image by drawing on other discourses, albeit in ironic or ambivalent ways. This article introduces the theme of changing gender politics in gambling advertising, while also deepening the discussion about masculine positions and humor in this field.

Methodology and Material

I studied three commercials (Bethard 2018a, 2018b, 2018c), all produced by the Swedish gambling company Bethard, registered on the island of Malta, and featuring football player Zlatan Ibrahimović and actor Dragomir Mrsic. Felix Herngren directed the commercials, which were aired online as well as on various Swedish television channels in 2018. The three commercials were launched as parts one, two, and three on YouTube, and can thus be seen as one commercial consisting of three "episodes." However, they can also be watched independently or in varying orders. Mrsic had previously been the public face of Bethard; in March 2018 he was joined by Ibrahimović, who became an owner of the company around the same time. This instance of celebrity endorsement can be seen as an attempt to use the meanings and status associated with Ibrahimović to promote the company but also to change its image (Awasthi and Choraria 2015, 216). This is confirmed by owner Erik Skarp in an interview: "It will be softer and more comical, we take a soft but big step away from the hard stuff you have seen before . . . . We will now try to communicate modern masculine attitudes . . . of course, this lies close to our hearts, as it is based on equality" (betting.se 2018, my translation). The Bethard commercials were not met with debate or negative attention. Instead, Ibrahimović's investment in Bethard was discussed extensively in Swedish media, and his promotional role continued; in addition to several very short, impromptu-style videos of Ibrahimović and Mrsic, Ibrahimović continued to present documentary and journalistic content on the Bethard YouTube channel. I followed researchers who use limited ranges of materials in order to make broader points about cultural representations, subject positions, and gender politics (e.g., Gee 2014; Messner and de Oca 2005; Schroeder and Zwick 2004). I suggest that the Bethard commercials illustrate several tendencies important to gender politics, gambling politics, and gambling advertising. They resemble ironic, distanced portrayals of hypermasculine men while also drawing on postfeminist themes, rendering them symbolic of larger changes in gender politics and consumer culture in Sweden and in the Western world. I watched the commercials both as a set and separately, in varying orders.
The material consists of visual, aural, and linguistic messages, all of which were seen as contributing to the meaning produced within it. After having performed a denotative reading, I went on to study the connotations of the commercials; themes present within the material were scrutinized and put in dialogue with previous literature and then re-evaluated, leading to the construction of more relevant themes, in a circular process (Gill 2007; Hall 1997). Inspired by previous research (Gee 2014; Gill 2007; Williamson 1978), special attention was directed at the portrayals of the men, their bodies and dress, the settings in which they were portrayed, their verbal and nonverbal interaction (with each other, with other men, and with women), as well as direct and indirect references to betting and gambling. Tools from semiotic and discourse analysis have been used to understand the material (Gill 2007; Hall 1997; see also Williamson 1978). The commodity or brand "create[s] structures of meaning" which are "translated into statements about who we are and who we aspire to become," with the advertisement urging us to become that which we are addressed as (Gill 2007, 50). Importantly, we may respond in various ways, and even resist the intended meanings and positions offered by the advertisement (Hall 1997). Similar to Benwell (2003, 153ff), I therefore acknowledge several possible readings in order to give space to the ambivalence within the text. I used poststructuralist perspectives (Butler 1990; Whitehead 2002) in my approach to gender, men, and masculine positions, and see them as co-constructed with, for instance, raced and classed positions (Crenshaw 1991; Staunaes and Søndergaard 2011). Stephen Whitehead suggests that the notion of "man," together with "the male body," can be understood as "the central, possibly most stable, reference point for the masculine subject as it seeks to create and realize its own existence" (2002, 212). Thus, while by no means necessary in order to produce masculine positions, (normative) male bodies are strong signifiers, stabilizing, enabling, and restricting the gendered positions that are produced, without completely controlling them, leaving room for variations, nuances, tensions, and protests.

Analysis

I start this section with a description of the commercials, inspired by the denotative reading performed at the initial stage of the analysis. I then go on to discuss ironies, postfeminist themes, and the gendered politics of gambling in the material. As described at the outset of the article, the first commercial (Bethard 2018a) portrays Mrsic and Ibrahimović as half-naked, sweaty, and tattooed in a dark, large room, firing guns and roaring, action-movie style, and then at a tea parlor, dressed in brightly colored cozy sweaters and pants. The former part is labelled "cheesy" (lökig), the latter "soft." As Judith Williamson (1978) notes, meaning is produced in the relationship between various signs, and here the interplay between the two venues, the clothing and music, and how the men relate to each other (roaring versus talking), along with the slogan ("Bethard. Be Soft"), produces a distinction between "hardness" and "softness." This interplay recurs in the second commercial (Bethard 2018b), where Ibrahimović and Mrsic sit in a tattoo parlor.
While Mrsic is being tattooed on his back, Ibrahimović and Mrsic discuss whether Ibrahimović is going to play for Sweden in the world championship of 2018 (he did not) and the confusing difference between odds and probability (high odds equaling low probability). The dark room and its attributes signal hardness, but when Mrsic reveals his back tattoo it is shown to be an image of dolphins hovering over a turquoise ocean, surrounded by red hearts. "Insanely soft," Mrsic remarks, as Ibrahimović noddingly admires his tattoo. Contrastingly, the third commercial is set on a light, sunny beach. Mrsic is jogging and runs into Ibrahimović, who is taking a small child for a walk in a stroller. "Oh my God, how cute!" a sweaty Mrsic exclaims, taken by the cuteness of the child. As he bends down, his face is filmed as if from within the stroller, filling the screen. Ibrahimović explains that it is his neighbor's child and that taking it for a walk is "soft" and equals "life quality." Mrsic teasingly suggests that the child looks similar to Ibrahimović, implying that it might in fact be his child, then jogs away. Ibrahimović continues walking with the stroller and, as the logo fills the screen, says: "you got a compliment there. Not bad." While the dialogue is in Swedish, the slogan is in English, and it is the English word "soft" that is used. 1 While it retains some of its meanings from English, as a loanword it has also taken on new ones: it means good, nice, laid-back, or taking it easy (Ordguru 2019; Slangopedia 2019). 2 I suggest that "soft" has multiple and shifting meanings throughout the commercials, which will be discussed subsequently.

Hard and Soft: Ambivalent Ironies

While the meanings of hardness and softness are ambivalent, they are obviously connected to gender. Hardness is an integral part of Western, normative constructions of masculinity, built on distancing oneself from femininity, being a "big wheel" and a "sturdy oak," and being aggressive and risk-taking (Brannon 1976, quoted in Kimmel 1997). Meanwhile, "soft" can refer to insufficient masculinity (see, e.g., Bly 1990, 2ff), similar to the feminizing terms in R. W. Connell's (2005, 79) discussion of how certain ways of being a man are excluded from what she calls hegemonic masculinity. The gendered meanings are by no means unintended; indeed, as mentioned above, the Be Soft campaign represents an attempt to "communicate modern masculine attitudes" (betting.se 2018). Additionally, the two venues displayed in the first commercial are clearly, even stereotypically, gendered: while the darkened, industrial room is connected to exaggerated action-movie masculinity, the tearoom, filled with flowered teapots, textiles, and women among the other guests, is connected with femininity. Representations of men have changed in many contexts, but Jouhki (2017) suggests that "rock solid" manhood holds its ground in gambling advertising. In stark contrast to this, hardness is dismissed as cheesy, a ridiculous charade, in the Bethard commercials. Compared with Benwell's (2003) ambiguous irony, simultaneously poking fun at "heroic" and "unheroic" masculinities, magazine writers, readers, and celebrities, the Bethard commercials seem less subtle; it is the hypermasculine action-movie setting, the fires, chains, roaring, and guns, and the sooty bare chests and military-style painted faces, that should not be taken seriously.
Correspondingly, the imperative to Be Soft indicates that this is the position from which these stereotypical aspects are mocked, and that softness should be taken seriously. The ridiculing of hardness is familiar from other contemporary advertising, which "simultaneously celebrate and mock" "intentionally excessive displays of masculinity" (Barber and Bridges 2017, 43). Barber and Bridges suggest that such commercials create distance between White, young men and "hegemonic masculinity" by associating the men with practices or attributes from othered groups, while at the same time reinforcing existing gendered power relations (see also Bridges and Pascoe 2014). Additionally, while such "hybrid masculinities" may entail a sexualizing of men, this is seldom done in disempowering ways (see also Gee 2014). The Bethard commercials have much in common with this: the privileging of softness can be seen as a way of distancing Ibrahimović and Mrsic (as well as the Bethard brand) from stereotypical masculine hardness, and while their bodies are on display, the nakedness is associated with hardness, not sexuality, and then rejected as parodic. Humor and hybridity are discussed also by Messner, who points to the importance of "the kick-ass muscular heroic male body" (Messner 2007, 469) in US politics and culture. Such bodies are often combined with "situationally expressive moments of empathy, grounded in care for kids" (2007, 469), which however "tends to facilitate and legitimize privileged men's wielding of power over others" (2007, 477). To put it in the words of the Bethard commercials, Messner argues that while situationally appropriate aspects of softness may be displayed, the hardness associated with both normative masculine positions and the male body always comes first (2007, 475; see also Gee 2014). The hard, muscular male body is demonstrated in the action-movie scenes (Bethard 2018a), and the viewer knows that it remains, even when clothed. Using Messner and Gee, it can be argued that the muscular body prevents softness from becoming emasculating. Instead, softness becomes a way of creating distance from hypermasculine heroism, while also legitimizing a seemingly new and changed masculine position (Bridges and Pascoe 2014). The mocking of hardness is easy to spot, but I suggest that there are also more ambivalent ironies in the Bethard commercials. The "soft" tattoo of dolphins and hearts in episode two (Bethard 2018b) is over-the-top, and when Mrsic comments that he was thinking about tattooing "cute damned rabbits" instead, the scene directs irony at softness. Mrsic's joke to Ibrahimović in episode three about the child being similar to him draws on discourses of masculine sexual prowess and infidelity rather than softness (in Sweden, it is well known that Ibrahimović is married and has a family). However, Ibrahimović's character refuses to respond in a similar fashion, which functions to reinstate easygoing softness as opposed to laddish bragging about womanizing. Also, Mrsic's greeting of the child, emphasized by being filmed from the child's perspective, marks him as truly enthusiastic rather than ironic. The commercials thus do not completely commit to either softness or hardness (see also Benwell 2003). This ambivalence makes room for a range of viewer positions, associated with varying gender politics: the viewer may appreciate "hardness" or regard it with apprehension or as laughable.
While advertising for grooming products (e.g., Scheibling and LaFrance 2019) can be seen as an attempt to widen men's practices in a more feminine direction, the Bethard commercials, like Messner and de Oca's beer commercials, mock masculine practices while also legitimizing aspects of them. This can be seen as a pragmatic strategy to widen the market, connecting the Bethard commercials to an international consumerist culture in which companies may hold on to existing gendered patterns or induce social change, all in the interest of creating new markets and turning a profit (Alexander 2003, 536).

Postfeminist Negotiations

The mocking and rejection of hardness links the Bethard commercials to wider tendencies in representations of men (Barber and Bridges 2017; Gee 2014; Messner 2007), but as the hybrid masculinity framework suggests, the rejection of hardness seems to work more to legitimize than to radically change current gendered power relations. A way of furthering this analysis is to contextualize it and put it in dialogue with research on postfeminism. Postfeminist representations tend to emphasize individualism, choice, and agency, sending the message that structural problems are of the past, and not the fault of men (Dow 2006; Gill 2014). Postfeminist representations refer to specific versions of feminism (Gill 2016, 612), which may include contextual, local feminist discourses. In Swedish postfeminist representations, ideologies of Swedish gender equality and discourses about Swedish gender-equal men have been prominent (Björklund 2018; Goedecke 2020). In them, the dual-carer, dual-earner family, including the need for men to change into engaged, present fathers and husbands, has been emphasized (see also Klinth 2002). In the Bethard commercials, Ibrahimović's walk with his neighbor's child and Mrsic's enthusiasm towards it connect with these postfeminist representations, as the men are portrayed enjoying taking care of children, even those that are not their own. At the same time, the child not being Ibrahimović's strips him of responsibility and allows him and his time with the child to remain soft and laid-back. Recent research has pointed to the importance of men's friendships in formulations of Swedish postfeminism (Goedecke 2020). In the commercials, Ibrahimović and Mrsic interact amiably or through friendly banter, drinking coffee and going to the tattoo parlor together, unlike in the action-movie sequence, where they interact aggressively and competitively. Significantly, this sequence contains roaring and is devoid of conversation, which is present in the later parts. Conversations have been shown to be central to the idea of emotional, close friendships between Swedish, allegedly gender-equal men (Goedecke 2018). Additionally, research on Swedish postfeminist representations points to an absence of the ironic sexism common in UK and US representations (Gill 2014; Hansen-Miller and Gill 2011; Messner and de Oca 2005). This tallies well with the Bethard commercials. Also typical of postfeminism, the vaguely feminist message is used for commodification: whether Bethard's customers are soft or hard, they should still bet hard. The only women present are the other visitors at the tea parlor (Bethard 2018a), but they quickly dissolve into the background as the camera zooms in on Ibrahimović and Mrsic. The absence of women in the commercials contributes to the idea that men have an interest in being soft, that is, that softness is a men's initiative, present even in homosocial groups.
The shift from hardness to softness in the first episode symbolizes men changing from an outdated to a "modern masculine attitude" (betting.se 2018), and the homosociality of the commercials renders the feminist critique that has inspired such changes invisible: Men have already changed, on their own accord. As Bonnie J. Dow comments, postfeminist representations of men are "crucial . . . to promoting the idea that women's problems are their own responsibility" (2006, 121). Also, the association between softness and laid-backness indicates that such change has not been preceded by struggle, activism, or killjoy activities (Ahmed 2010). In this manner, not only feminism but also feminist methods like activism are rendered redundant and irrelevant: In a fuss-free, laid-back way, men have somehow managed to update their "masculine attitudes." Surprisingly, in the light of international examples, such as the Gillette "The best a man can be" campaign from 2019, which resulted in significant online backlash, the commercials did not meet with criticism due to their feminist-inspired content. However, while the Gillette campaign engaged in a number of overt criticisms of men and men's behavior, the Bethard ones are more circumspect, and do not, I suggest, question men or masculine stereotypes to the same extent. Additionally, men have been addressed as "gender-political subjects" since the 1960s in Sweden (Klinth 2002), rendering connections between men and feminism less controversial, which arguably points to a widely disseminated and normalized Swedish postfeminism (Goedecke 2020). Another aspect of postfeminism concerns the actors, Mrsic and Ibrahimović, whose (carefully curated) public personas are significant to the interpretative possibilities of the commercials. As Awasthi and Soraria (2015, 216) note, celebrity endorsement is a technique which uses the image and status of a celebrity to promote a particular brand. Had the commercials featured other people, such as unknown actors or another football player, the meanings produced would have been different. Ibrahimović's career spans football clubs in several countries, including Spain, France, Italy, and the US, and his public persona as a "guy from the ghetto," his outspokenness, and his individualist way of presenting himself to the media, together with his style of playing football, are well known in Sweden and internationally (Sarrimo 2015). Mrsic is a domestic celebrity who, in addition to being a former Swedish master of taekwondo and an Olympic coach, is known for several "hardboiled" roles in both Swedish and US films (Eijde 2019), and for his involvement in a high-profile robbery in the 1990s. Born in Sweden but of Balkan descent, they have images as tough guys, which enables them to celebrate softness without being emasculated (a similar effect as that discussed earlier à propos the hard masculine body). 3 According to Christine Sarrimo, Ibrahimović's journey from being a poor, immigrant boy, gifted on the football field, to an international superstar echoes "both a myth of the alienated male outsider and his road to fame-and a myth of the autonomous Western subject or lone ranger who-against all odds, and due solely to his own merits-succeeds" (2015, 6). The journey from underdog to celebrity evokes individualist and gendered discourses regarding hard work and the masculine body, but also of class and race. According to Sarrimo (2015), this journey is linked to Ibrahimović's public persona becoming White/r and more respectable. 
She discusses a Volvo commercial where Ibrahimović hunts, goes ice-bathing, and at last returns home to his waiting wife and children, while also reciting the text of Sweden's national anthem. In this commercial, not only the national anthem but also Ibrahimović's wife becomes a strong signifier of him "finally being integrated and assimilated into 'genuine' Swedishness . . . . The blonde Swedish woman is construed as the 'wild' immigrant's restraining force" (Sarrimo 2015, 10). Using Sarrimo, it can be argued that the postfeminist, gendered implications of softness are linked to race and nationality, and that the Bethard commercials contribute to a Whitening of Mrsic and Ibrahimović's public personas. In Sweden, middle-class, White men have been associated with the culturally celebrated values of gender equality: emotionality, being a present and engaged father, and a nonhomophobic close friend (Klinth 2002; Goedecke 2018). Meanwhile, immigrant and racialized men have been excluded from such positions and seen as unmodern, traditional, and violent (Gottzén and Jonsson 2012). Whiteness has often been centered also in postfeminist discourses, even if this has been critiqued in more recent work (Butler 2013; Gill 2016). In line with this, the drawing upon "softness" and "modern masculine attitudes" links Mrsic and Ibrahimović to the (supposedly) emotional and responsible Swedish gender-equal man, and renders them less hard, Whiter, and "more Swedish" (see Lundström 2007 for a discussion about Whiteness and Swedishness). The initial display of the athletic, "foreign," male body can also be linked to this, as the actors can be interpreted first as aggressive, even primitive, and later as polite, civilized, and dressed, adjectives intimately connected to racist and colonialist discourses centering dichotomies like mind/body, human/animal, and rational/emotional and linking them to White and non-White bodies respectively (e.g., Whitehead 2002, 194ff). However, as noted earlier, the privileging of softness in the commercials is ambivalent. Similarly, Ibrahimović is a contested site, whose popularity and controversiality as a football player, businessman, and public persona make him hard to categorize. Relatedly, the "Zlatan's version" of the first commercial, with Bosnian dialogue (Bethard 2018d), works to keep Ibrahimović's persona in flux. In it, the dialogue at the tea-parlor amounts to Ibrahimović complaining to Mrsic about taking all the milk, and Mrsic telling Ibrahimović to chill out and have a cookie. 4 The language marks both actors as "immigrant," and thus "less White" (see also Lundström 2007). If softness is interpreted as connected to postfeminism, this indicates that Swedish formulations of postfeminism are being widened beyond the White, Swedish middle class, symbolically welcoming Ibrahimović and Mrsic (this tendency has been pointed to also by Goedecke 2020). On the other hand, "Zlatan's version" can be seen as a resistance against attempts to "Whiten" Mrsic and Ibrahimović and against connecting softness and "modern masculine attitudes" to Swedishness. Bet Soft: The Gendered Politics of Gambling The viewer of a mediated message such as a commercial is often required to do "advertising work," that is, fill in the blanks and make sense of the message(s) in the text (Williamson 1978). The absence of practices, attributes, or effects of gambling, betting, or winning in the material constitutes such a "blank" (see also Kroon 2019). 
This absence renders the commodity, betting and being a member of the Bethard betting site, vacuous and highly symbolic. Williamson (1978) suggests that one way of constructing meaning in advertisements is by transferring meanings from commodities that are already meaningful to less known commodities. Gambling may be portrayed as glamorous or profitable (Sen and Lou 2016) or may be associated with entertainment; a natural part of having a good time with friends and family (Deans et al. 2016; McMullan and Miller 2010). Gambling may also be associated with sports, as Emily G. Deans et al. show: Sports betting is portrayed as a natural part of being a sports fan and of watching a game with friends, an association strengthened by celebrity endorsements by sports stars. Meanings of sports, being "socially and culturally valued" (Deans et al. 2016, 2), are transferred to gambling, which helps disassociate it from problem gambling and from medical discourses emphasizing addiction. Importantly, the Bethard slogan disassociates betting hard from being hard, positing that aggressive or daring bets may be placed in a soft and laid-back way. I suggest that this, similar to how sport functions in Deans et al. (2016), disassociates betting with Bethard from compulsive gambling, thus rehabilitating it and rendering it harmless (see also Kroon 2019). More specifically, I suggest that the emphasis on softness focuses the "hypothetical 'deficit', the difference between a pathologized 'problem gambler' and an ideal-type 'recreational' gambler" that Charles Livingstone and Richard Woolley (2007, 364) suggest is constructed within ideas about "responsible gambling." The responsible gambling paradigm seeks to "minimiz[e] potential gambling-related harms while maintaining gambling as a recreational activity" (Blaszczynski et al. 2011, 566), and sees gambling in itself as harmless and unproblematic. This discourse is based on a neoliberal notion of individuals, once properly informed, as rational, and emphasizes personal responsibility (see also Alexius 2017). Thus, the vulnerabilities of certain individuals, rather than gambling as a practice or an industry, constitute the problem, which is best addressed through self-help groups or better information. As Livingstone and Woolley note, "[t]he option of making the gambling product safe is not available. What is needed is some fine-tuning of the practices of an errant coterie of imprudent consumers" (2007, 364). The emphasis on being soft while betting hard in the Bethard commercials resonates with this. Being soft, as opposed to obsessive, renders betting unproblematic and connects it to the "ideal-type 'recreational' gambler" (Livingstone and Woolley 2007, 364). Thus, softness (along with the obligatory messages about gambling responsibly 5 ) functions as the "fine-tuning" Livingstone and Woolley describe. However, the Bethard company slogan "Winners dare more" indicates that a better must be daring, bold, and risk-taking in order to win, which together with the imperative to Be Soft becomes an untenably contradictory message. "Soft betters" may, one surmises, be able to take this in their stride, but others must tread the fine line between being or betting too soft and not being soft enough, that is, being obsessive or compulsive in relation to gambling. Betting hard while being soft thus becomes a question of control and a balancing act that the better, not the industry, is left to deal with. 
While male betters are undoubtedly centered and normalized in the commercials, displays like this also render them vulnerable to consumerist and "responsible gambling" discourses. Performing feminist studies on men and gambling (or men in consumerist culture more generally) necessitates highlighting this complexity while also emphasizing that men's vulnerabilities do not only concern men but may affect others and be related to other structural issues. For instance, international research points to strong links between gambling and intimate partner violence (IPV) against women (intensified whether the gambler is male or female) (Hing et al. 2020). Swedish research points to links between being a CSO (concerned significant other) to a gambler and being subjected to violence, a connection found among both male and female CSOs but significantly stronger among women. Also, young, immigrant, and working-class men are especially vulnerable to gambling problems in Sweden (Svensson 2013, 10). The urgings to bet soft do not address these or other social problems connected with gambling. Instead, by connecting allegedly reformed postfeminist masculine positions to the idea of the responsible gambler, the commercials serve to individualize and obscure prevalent problems. Concluding Discussion In this article, I have discussed three sports betting commercials as arenas where gender politics, masculine positions, and race are negotiated, and connected softness to postfeminism and to ideas about Swedish gender-equal men and "responsible" gamblers. Returning to the question posed at the outset of the article, about the significance of changed representations of men, I have shown that the depiction of both men and gambling as soft is concomitant with developments in international consumerist culture, where the acceptance of traditional, or "hard," masculine positions is lessening while unjust gendered power relations and relations between different (e.g., racialized or classed) groups of men often fail to be questioned. The commercials represent another facet of the Swedish gender-equal man; he is not only an (allegedly) devoted father but also a "soft," "ideal-type 'recreational'" gambler. This normalizes and rehabilitates gambling, which, when entered into "softly," is apparently harmless. Importantly, the emphasis on "soft" betting produces distinctions between gamblers, normalizing some while implying that others are excessive, incomprehensible, and personally responsible. The widening of postfeminist discourses in stressing Ibrahimović's "Bosnianness" constitutes a development in Swedish racial and gender politics, but this application of consumerist and postfeminist logics can also be seen as an attempt to widen the market even further. The idea that postfeminist discourses center whiteness while also interpellating people of color has been articulated (Butler 2013), but it has largely been framed as an issue concerning women. My analysis points to the importance of discussing men, race, and nationality in postfeminist consumerist culture. The article shows the importance of connecting research on men in consumerist culture, including work on "hybrid" or "flexible" masculinities, to postfeminism, but also points to the continued need for further research on gender, masculinity, and gambling in cultural representations, lived experience, and gambling policy, in Sweden and beyond. 
The incorporation of postfeminist commercial logics in gambling advertising is a development that must be followed closely to prevent further depoliticization of both these fields. Acknowledgments Many thanks to Jenny Björklund for her insightful comments on an earlier version of this text, to Jasmina Sargac for help with the translation, and to the two anonymous reviewers for helpful comments, which greatly improved the text. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
The Influences of gradient color on the weight perception and stability perception: A preliminary study Gradient colors are widely used in product design. The variation of gradient colors, muting a color in a series of steps from bright to dull, creates a soft and gradual impression while also affecting people's perceptions. This study manipulates the types of gradient colors to explore the relationship between color gradients and the perception of stability, and to determine whether weight perception plays a role. While controlling for aesthetic differences, the study manipulated two types of color gradients (dark colors fading upward from the bottom versus downward from the top) and measured the perceptions of product stability. Within the same hue, an upward gradient gives a stronger perception of stability. In addition, gradient colors influence women's perception of stability significantly more than men's. The study also investigated the mediating effect of weight perception: participants evaluated products with colors fading upward as having greater weight relative to products with colors fading downward. Furthermore, dark colors fading upward from the bottom lead to a stronger perception of weight, increasing the stability perception of the object. Finally, to aid future research, we discuss the practical implications of the current findings for areas such as sensory marketing, as well as possible directions for future research. This study takes the monochrome brightness gradient type (dark color at the bottom fading to a light color on the top versus light color at the bottom fading to a dark color on the top, as in Figure 1) as the independent variable and people's perception of product stability as the dependent variable, with product weight perception selected as the mediating variable. We hypothesize that the monochrome brightness gradient type transitioning from a bottom dark color to a top light color brings stronger weight perception and thus increases people's perception of object stability. The research framework is shown in Figure 2, with the following hypotheses: H1: A dark upward gradient of the monochrome brightness gradient type (dark color transitioning from the bottom to a light color at the top) brings a stronger perception of stability. H2: A dark upward gradient of the monochrome brightness gradient type (dark color transitioning from the bottom to a light color at the top) brings a stronger perception of weight. H3: The weight perception caused by the monochrome brightness gradient type plays a mediating role in the process of stability perception. Participants The research data were collected from Wenjuanxing (https://www.wjx.cn) in China, an online survey company specializing in providing questionnaires and data collection services. A total of 132 participants volunteered to participate in this study. The participants were 18-30 years of age or older, of whom 44 were male and 88 were female. All participants had normal or corrected-to-normal visual acuity and no color blindness or color weakness. All of them signed written informed consent. The study was approved by the academic ethics committee of the first author's university. Materials In the early stage of the experiment, the participants were asked to self-report, based on their vision certificates, that they had normal or corrected-to-normal visual acuity and no color blindness or color weakness. The participants were also asked to input the numbers seen on plates selected from the Ishihara test. The Ishihara test is the most popular and generally considered to be the most efficient test for screening red and green congenital defects (Melamud et al., 2004). 
The test plates are one from the "Demonstration plates" (designed to be visible to all persons, whether with normal or deficient color vision, and used to confirm that the participant has no other visual impairment) and two from the "Transformation plates" (individuals with a color vision defect see a different figure from individuals with normal color vision). The screening criterion was answering two of the three questions correctly, taken as evidence of normal or corrected-to-normal visual acuity and no color blindness or color weakness. The stimuli consisted of six kinds of products: coffee mugs, water bottles, vases, file boxes, facial cleansers, and pen containers, all of which are common products for which stability is one of the more sensitive consumer attributes at purchase. Considering that aesthetics is one of the important factors influencing product consumption decisions, products with different color gradient types may carry different aesthetic attributes (Gegenfurtner & Kiper, 2003). We therefore ran a pre-experiment to assess the materials. Two types of color brightness gradients (upward and downward, see Figure 3) were created for the same product. The color of each product was chosen from the most commonly used product colors on the market to simulate picking products in consumer scenarios. In total, 23 participants who did not take part in the formal experiment joined the pre-experiment, and all of them rated the aesthetics of the six products. The evaluation used a 5-point Likert scale, where 1 indicates "very unattractive" and 5 indicates "very attractive." Paired t-tests on the aesthetics of the six products under the two color gradient types all yielded p > .05: coffee mug (p = .106), water bottle (p = .068), vase (p = .362), file box (p = .315), facial cleanser (p = .119), and pen container (p = .170), indicating no significant difference in product aesthetics between the two gradient conditions, as shown in Table 1. Procedures A single-factor within-subjects design was used for the experiment. The independent variable is the type of color gradient, with two levels: dark upward gradient (lighter on top and darker on bottom) and dark downward gradient (darker on top and lighter on bottom). The dependent variables are the perceived stability and perceived weight of the product. To avoid possible interactions between different gradient types of the same product in the perception assessment, we used a between-item approach, where the same participant assessed three products (e.g., coffee mug, water bottle, and vase) with one type of gradient (e.g., upward gradient) and another three products (e.g., file box, facial cleanser, and pen container) with the other type of gradient (e.g., downward gradient). Given a consumption scenario with particular stability requirements, the participants were asked to successively score the degree of stability of the three products with an upward gradient and the other three products with a downward gradient. Stability was scored on a 7-point scale, with 1 indicating "very unstable" and 7 indicating "very stable". After the stability assessment, the participants were also asked to estimate the weight of the products. Following the method of Hagtvedt and Brasel (2017), the participants evaluated the weight of each product in kg or g within a given reasonable weight range.
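Before turning to the results, the core comparisons this design calls for (paired t-tests on the stability ratings, and within-product standardization of the weight estimates before comparison) can be sketched in Python. This is not the authors' script: the simulated arrays below merely stand in for the questionnaire export, the column layout encoding the between-item split is an assumption for illustration, and all variable names are hypothetical.

```python
# Minimal sketch of the paired comparisons described above, on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 132  # participants

# 7-point stability ratings, averaged over the three products each
# participant rated per gradient type (simulated to echo the reported means).
stability_up = rng.normal(5.74, 1.37, n).clip(1, 7)    # dark-upward gradient
stability_down = rng.normal(2.64, 1.56, n).clip(1, 7)  # dark-downward gradient

t, p = stats.ttest_rel(stability_up, stability_down)
diff = stability_up - stability_down
d = diff.mean() / diff.std(ddof=1)  # Cohen's d for paired designs (d_z)
print(f"stability: t({n - 1}) = {t:.2f}, p = {p:.3g}, d = {d:.2f}")

# Weight estimates come in kg/g on very different scales per product, so
# z-standardize within each product (column) before averaging and comparing.
weights = rng.normal(0.5, 0.2, (n, 6))  # 6 products, raw weight estimates
z = (weights - weights.mean(axis=0)) / weights.std(axis=0, ddof=1)
weight_up, weight_down = z[:, :3].mean(1), z[:, 3:].mean(1)  # assumed split
print(stats.ttest_rel(weight_up, weight_down))
```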
Results A paired-samples t-test revealed a significant effect of gradient type on perceived product stability, t(131) = 15.05, p < .001, Cohen's d = 2.63: perceived stability was significantly higher for the upward gradient (M = 5.74, SD = 1.37) than for the downward gradient (M = 2.64, SD = 1.56). Since the weight range of each type of product varies widely, the weight estimates were standardized before a paired-samples t test, which revealed a significant effect of gradient type on product weight perception, t(131) = 2.46, p = .015, Cohen's d = 0.43: the weight perception of the upward gradient (M = 0.11, SD = 0.71) was significantly greater than that of the downward gradient (M = −0.11, SD = 0.73). Pearson correlation analysis showed pairwise correlations between gradient type, weight perception, and stability perception, as shown in Table 2. Model 4 of the PROCESS macro (Hayes, 2013) was used to test the indirect effect of weight perception in the relationship between gradient type and stability perception (see Figure 4). Color gradient type significantly and positively influenced stability perception (b = 3.02, p < .001) and weight perception (b = 0.22, p = .013), weight perception significantly and positively influenced stability perception (b = 0.36, p = .004), and weight perception mediated between gradient type and stability perception with a 95% CI of [0.011, 0.177], which does not contain 0; thus, the mediating effect was significant. The results also showed differences in stability perception between males and females, with females having a higher perception of stability than males, t(130) = 2.35, p = .02. An analysis of variance (ANOVA) was conducted on the effects of gender and gradient type on stability assessment. The main effect of gender on the estimation of stability was not significant, F(1, 130) = 0.55, p = .459; the effect of gender on the estimation of weight was not significant either, F(1, 130) = 0.11, p = .744. However, there was a significant interaction between gender and gradient type on stability estimation, F(1, 130) = 5.72, p = .018. The simple-effects test found that under type A, the gender difference was not significant, F(1, 130) = 1.88, p = .173, but under type B, the gender difference was significant, F(1, 130) = 6.55, p = .012, with females' stability estimates (M = 5.95, SD = 1.20) significantly higher than males' (M = 5.32, SD = 1.58). This result seems to indicate that gender and gradient type jointly affect the perception of stability, with women being more sensitive to the downward gradient. Discussion In color psychology, many early studies overlooked the variable of brightness and referred only to hue (Rider, 2010). There are even fewer theoretical or empirical studies on the effect of color on the perception of stability. In examining the relationship between color gradients and stability perception by manipulating the types of gradient colors, we proposed the research hypothesis that the type of monochrome brightness gradient triggers the perception of weight, which in turn affects people's perception of product stability. 
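The indirect-effect test reported above (PROCESS Model 4) can be approximated outside SPSS with ordinary regressions and a percentile bootstrap. The sketch below is a rough stand-in for the macro's logic, not the authors' analysis; the data are simulated, so the estimates will only loosely echo the coefficients reported above, and every variable name is hypothetical.

```python
# Indirect effect of gradient type (X) on stability (Y) through weight (M):
# a*b from two OLS regressions, with a percentile bootstrap CI (cf. Hayes, 2013).
import numpy as np

rng = np.random.default_rng(1)
n = 132
x = rng.integers(0, 2, n).astype(float)       # 0 = downward, 1 = upward gradient
m = 0.22 * x + rng.normal(0, 0.7, n)          # weight perception (standardized)
y = 2.8 * x + 0.36 * m + rng.normal(0, 1, n)  # stability perception

def ols(dep, *cols):
    """Return OLS slopes for the predictors in `cols` (intercept included)."""
    X = np.column_stack([np.ones_like(dep), *cols])
    beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
    return beta[1:]

a = ols(m, x)[0]        # path a: X -> M
b = ols(y, x, m)[1]     # path b: M -> Y, controlling for X
boot = []
for _ in range(5000):
    i = rng.integers(0, n, n)  # resample participants with replacement
    boot.append(ols(m[i], x[i])[0] * ols(y[i], x[i], m[i])[1])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {a * b:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

As in the reported analysis, the mediation is judged significant when the bootstrap confidence interval excludes zero.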
We first controlled for the effect of aesthetic differences on the experiment, and the results of the pre-experiment confirmed that there was no significant difference in the assessment of product aesthetics under the different color gradient conditions of the current study. The results of the formal experiments showed that the upward gradient produced significantly higher stability perceptions than the downward gradient, and also significantly higher weight perceptions; these results supported hypotheses 1 and 2. In addition, the mediation analysis showed that weight perception mediated the effect of color gradient on stability perception, supporting hypothesis 3. The findings consequently provide direction for exploring the little-studied relationship between color gradient type and stability perception, and shed light on the underlying process, in which weight perception plays a mediating role. Interestingly, the results also revealed differences between males and females in the stability perception caused by the downward gradient, with females being more sensitive to the stability perception of the downward gradient than males. This may be related to gender differences in color metaphors; gender is also among the factors that influence color perception and application (Singh & Srivastava, 2011; Vanston & Strother, 2017). Moreover, the mediating effect of weight perception in the influence of color gradient on stability perception was relatively small in the experimental results, probably because visual illusions do not always extend to the tactile realm (Hagtvedt & Brasel, 2017). When consumers evaluate an object visually, there may be a disconnect between the visual perceptual processing and the corresponding tactile information (Walker et al., 2010). Future research could investigate in depth how these issues are addressed in the minds of consumers. Additionally, extant research indicates that emotions can enhance contrast sensitivity (Phelps et al., 2006). This suggests that the role of weight in the effect on stability perception may depend in part on the current emotional state of the viewer. The interaction of color perception with other sensory modalities remains largely unexplored and could represent a fruitful area for further exploration. Nevertheless, there are some limitations to this experiment. Since the experiment was conducted online, product color presentation relied on electronic displays. We cannot rule out the possibility that the color effect of the electronic presentation varied with the screen of the mobile device, due to differences in the environment or the hardware used by the participants. At the same time, because products of different shapes were used in this experiment, there might be an interaction between color and shape in the perceptual processing of evaluating object stability (Lupo, 2015). In addition, our experiments suffer from a general limitation of color research: it is difficult to ensure that the experimental stimuli reflect the exact attributes specified (Hagtvedt & Brasel, 2017), since the effect of color on people's mental perception is usually perceived unconsciously (Löffler, 2017). 
As this experiment took the form of an online questionnaire, in-depth interviews were not conducted with each participant regarding the motivations or reasons for their choices, which would have further confirmed that the mechanism behind the effect is as the experimental results suggest. Future research could also examine the extent to which the color effect relies on automatic or conscious processes. Very little research has so far focused on combined color effects (Deng et al., 2010). The present study focused on brightness and did not combine it with other dimensions such as hue, so it remains to be shown whether the effect works equally well for all hues. Future research could continue to explore the relationship between different hue gradients (e.g., red → cyan, yellow → blue) and stability. Furthermore, the inconsistent use of color terminology in the current literature may complicate the interpretation of study results. The terms brightness and value, or lightness and illuminance, are used interchangeably but sometimes have different meanings (Labrecque et al., 2013; Stone et al., 2006; Walker et al., 2010). In such situations, it is difficult to interpret research results expressed through terms and concepts that are hard to define clearly. As researchers, we need to be clear about the concepts referred to by the terms in our research discourse so that the results can be compared with future research.
Correction of Asymmetric Bowtie Corneal Astigmatism with a Toric Intraocular Lens: Outcomes and Accuracy of Measurement Modes The outcomes of toric intraocular lens (IOL) implantation in correcting asymmetric bowtie corneal astigmatism remain uncertain. The accurate measurement of corneal astigmatism is essential for surgical planning. In this prospective cohort study, patients with asymmetric or symmetric bowtie corneal astigmatism who underwent toric IOL implantation were recruited. Preoperative corneal astigmatism was measured with an IOLMaster and Pentacam (including the simulated keratometry (SimK), total corneal refractive power (TCRP), and wavefront aberration (WFA) modes). At 3 months after surgery, the refractive outcomes and residual astigmatic refractive errors were compared with patients with symmetric bowtie astigmatism. The prediction errors (the differences between the calculated actual corneal astigmatism and the measured corneal astigmatism) were compared among the different measurement modes in the asymmetric group. There were no differences in residual astigmatism between the asymmetric and symmetric groups. However, the mean absolute residual astigmatic refractive error was greater in the asymmetric group than in the symmetric group (0.72 ± 0.42 D vs. 0.53 ± 0.24 D, p = 0.043). In the asymmetric group, the mean absolute prediction errors for the IOLMaster, SimK, TCRP and WFA modes were 0.53 ± 0.40, 0.56 ± 0.47, 0.68 ± 0.52, and 0.43 ± 0.40 D, respectively. The Pentacam WFA mode was the most accurate mode (p < 0.05). The absolute prediction error of the WFA mode was positively correlated with the total corneal irregular astigmatism higher-order aberrations and coma (r = 0.416 and r = 0.473, respectively; both p < 0.05). Our study suggests toric IOL implantation effectively corrected asymmetric bowtie corneal astigmatism. The Pentacam WFA mode may be the most accurate measurement mode, although its accuracy decreased as asymmetry increased. Introduction In recent years, cataract surgery has evolved from a sight-restoring procedure to a refractive procedure. Epidemiological studies have shown that 15-29% of cataract patients have more than 1.5 diopters (D) of corneal astigmatism before surgery [1,2], which cannot be corrected with a conventional monofocal intraocular lens (IOL). Therefore, toric IOLs are usually used in these patients [3,4] and their effectiveness is now well established in regular and symmetric astigmatic eyes [4-6]. In contrast to regular astigmatism, irregular astigmatism is defined as an astigmatic state that cannot be corrected with a sphero-cylindrical lens. The prevalence of irregular astigmatism is about 40% [7,8]. According to computerized topographic systems, irregular astigmatism can be further classified into different types. Asymmetric bowtie astigmatism is an important type, characterized by unequal slopes of the hemimeridians along a single meridian [9]. The treatment of asymmetric bowtie corneal astigmatism remains challenging for cataract surgeons, and relevant studies are sparse. A previous investigation showed that the residual astigmatism after toric IOL implantation could be as high as 3.0 D in eyes with asymmetric bowtie or other irregular corneal astigmatisms [10]. The difficulty of correcting these astigmatisms lies in the development of a proper surgical plan to correct asymmetric bowtie astigmatism with a symmetric toric IOL. 
Measurement accuracy is essential for the surgical planning of patients with corneal astigmatism. Inaccurate measurement may result in significant residual astigmatism after surgery, which can significantly affect postoperative visual acuity. One study showed that postoperative refractive astigmatism largely determines postoperative visual acuity: each diopter of residual postoperative refractive astigmatism resulted in a 1.5-line reduction in postoperative distance visual acuity [11]. In recent years, biometric instruments, such as IOLMaster and Pentacam, have become widely used to measure corneal astigmatism. IOLMaster uses telecentric keratometry to reduce the disruption to image acquisition caused by patient movements [12] and usually calculates the corneal power from the anterior surface [13]. Pentacam reconstructs the shape of the anterior and posterior corneal surfaces using Scheimpflug photography principles [14]. The simulated keratometry (SimK) mode (which measures the anterior corneal keratometric power), total corneal refractive power (TCRP) mode (which analyzes the overall refractive status of the cornea, including the anterior and posterior surfaces), and wavefront aberration (WFA, the deviation of a reflected wave from a reference unaberrated wave) mode are the three Pentacam modalities most commonly used to evaluate corneal astigmatism [15]. However, the different measurement methods of these devices lead to significant variability in the resulting values. Previous studies showed that TCRP, which takes the effect of posterior corneal astigmatism into account, was more accurate than SimK for calculating toric IOL power in eyes with symmetric corneal astigmatism [16,17]. Another study showed that IOLMaster measured significantly greater keratometry readings on the steep axis for all participants, and that the keratometry and WTW measurements of the IOLMaster and Pentacam cannot be used interchangeably in keratoconic patients [18]. However, it is unclear which measurement mode is most accurate in the surgical planning of toric IOL implantation in eyes with asymmetric bowtie corneal astigmatism. Therefore, in the present study, we first evaluated the refractive outcomes of toric IOL implantation in eyes with asymmetric bowtie astigmatism, and then compared the prediction errors between four different measurement modes: IOLMaster, and the SimK, TCRP, and WFA modes of Pentacam. Patients Patients who underwent cataract surgery with toric IOL implantation (AT Torbi 709M IOL, Carl Zeiss AG, Oberkochen, Germany) between November 2018 and March 2020 at the Eye and ENT Hospital of Fudan University were continuously recruited for this study. The symmetry of the corneal bowtie in all patients was evaluated with a Pentacam HR (Oculus Inc., Wetzlar, Germany). Based on the corneal topography, the inferior-superior index (I-S) was defined as the corneal curvature of the inferior points minus the corneal curvature of the superior points on the steep axis at 5 mm on the sagittal curvature (front) image, whereas the superior-inferior index (S-I) was defined conversely. The skewed radial axis index (SRAX) was defined as the axis difference between the two lobes of the bowtie [13]. Asymmetric bowtie corneal astigmatism was defined as I-S > 1.5 D, S-I > 2.5 D, or SRAX > 22° [19]. Only patients with an I-S > 1.5 D or an S-I > 2.5 D were enrolled in our study. 
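For clarity, the enrollment rule just described can be written as a small predicate. The function name and return labels below are my own; only the thresholds (I-S > 1.5 D, S-I > 2.5 D, SRAX > 22°) come from the text, and this study further restricted enrollment to the first two patterns.

```python
# A tiny sketch of the asymmetry classification rule described above.
def classify_bowtie(i_s: float, s_i: float, srax: float) -> str:
    """Label a corneal bowtie from Pentacam-derived indices (D, D, degrees)."""
    if i_s > 1.5 or s_i > 2.5 or srax > 22.0:
        return "asymmetric"
    return "symmetric"

def eligible_for_study(i_s: float, s_i: float) -> bool:
    """Only the I-S > 1.5 D or S-I > 2.5 D patterns were enrolled here."""
    return i_s > 1.5 or s_i > 2.5

print(classify_bowtie(i_s=1.8, s_i=0.4, srax=5.0))  # -> "asymmetric"
```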
Patients with corneal pathologies, such as keratoconus, pterygium or corneal scars, contact lens wear within the preceding 2 weeks, glaucoma, strabismus, previous trauma or ocular surgery, zonular weakness, severe fundus pathology, and uveitis, were excluded. Patients who experienced intraoperative or postoperative complications, such as posterior capsular rupture, severe and persistent corneal edema, pupillary capture of the IOL, or misalignment of the toric IOL by more than 10°, those with postoperative visual acuity less than 20/63, and those who were lost to follow-up, were also excluded from the analyses. Finally, 30 eyes in the asymmetric group and 30 eyes matched for age, gender and corneal astigmatism magnitude in the symmetric group were analyzed. The flow chart is shown in Figure 1. Preoperative Examinations The routine preoperative examinations included the assessment of visual acuity, a slit-lamp examination, fundoscopy, biometric measurements (IOLMaster700, Carl Zeiss AG, Oberkochen, Germany), corneal topography (Pentacam HR, Oculus, Berlin, Germany), and B scans (Alcon Laboratories, Fort Worth, USA). The spherical and cylindrical IOL power and the axis of each toric IOL were calculated based on the anterior corneal astigmatism measured with IOLMaster, using the IOL manufacturer's online calculator (version 1.5 containing the predicted posterior corneal astigmatism, https://zcalc.meditec.zeiss.com, accessed on 1 January 2023). The total corneal irregular astigmatism higher-order aberrations and coma aberrations in the 4 mm zone, measured with Pentacam, were also recorded for each eye. Surgical Procedures All the cataract surgeries were performed under topical anesthesia by one experienced surgeon. A 2.6 mm temporal clear corneal incision was created, followed by a 5.5 mm continuous curvilinear capsulorhexis, phacoemulsification, and the removal of the cortex. The toric IOL (CT Asphina 709MP, Carl Zeiss AG, Oberkochen, Germany) was implanted in the capsular bag under navigation with the Callisto Eye System (Carl Zeiss AG, Oberkochen, Germany). After the residual viscoelastics (DisCoVisc, Alcon Laboratories, Fort Worth, USA) were removed from above and below the IOL, the incision was hydrated, and the final IOL axis was checked and recorded. No stitches were used in any eye. The postoperative medications were the same in all patients. 
Postoperative Follow-Up Patients underwent follow-up examinations 3 months after surgery, including a sessment of visual acuity and manifest refraction at 5 m, corneal topography (Penta Preoperative Examinations The routine preoperative examinations included the assessment of visual acuity, a slit-lamp examination, fundoscopy, biometric measurements (IOLMaster700, Carl Zeiss AG, Oberkochen, Germany), corneal topography (Pentacam HR, Oculus, Berlin, Germany), and B scans (Alcon Laboratories, Fort Worth, USA). The spherical and cylindrical IOL power and the axis of each toric IOL were calculated based on the anterior corneal astigmatism measured with IOLMaster, using the IOL manufacturer's online calculator (version 1.5 containing the predicted posterior corneal astigmatism, https://zcalc.meditec.zeiss.com, accessed on 1 January 2023). The total corneal irregular astigmatism higher-order aberrations and coma aberrations in the 4 mm zone, measured with Pentacam, were also recorded for each eye. Surgical Procedures All the cataract surgeries were performed under topical anesthesia by one experienced surgeon. A 2.6 mm temporal clear corneal incision was created, followed by a 5.5 mm continuous curvilinear capsulorhexis, phacoemulsification, and the removal of the cortex. The toric IOL (CT Asphina 709MP, Carl Zeiss AG, Oberkochen, Germany) was implanted in the capsular bag under navigation with the Callisto Eye System (Carl Zeiss AG, Oberkochen, Germany). After the residual viscoelastics (DisCoVisc, Alcon Laboratories, Fort Worth, USA) were removed from above and below the IOL, the incision was hydrated, and the final IOL axis was checked and recorded. No stitches were used in any eye. The postoperative medications were the same in all patients. Postoperative Follow-Up Patients underwent follow-up examinations 3 months after surgery, including an assessment of visual acuity and manifest refraction at 5 m, corneal topography (Pentacam HR), and toric IOL axis alignment using a retroillumination photograph obtained with an OPD-Scan III corneal aberrometer (Nidek Co., Ltd., Gamagori, Japan) after mydriasis. The uncorrected distance and corrected distance visual acuities (in logarithm of the minimal angle of resolution), residual astigmatism, and misalignment of the IOL were recorded for each eye. The residual astigmatic refractive error, defined as the difference between the actual and predicted residual astigmatisms, was calculated and compared between the asymmetric and symmetric groups. The centroid and absolute mean values of the residual astigmatic refractive error were calculated in this study. Actual Surgically Induced Astigmatism The individual surgically induced astigmatism (SIA) of the cornea per eye was determined postoperatively using the vector difference between the postoperative and preoperative TCRP on Pentacam. Analysis of the Accuracy of Corneal Astigmatism Measurements To determine the accuracy of the measurements of corneal astigmatism in the asymmetric group, we analyzed the prediction errors (the differences between the estimated preoperative corneal astigmatism and the measured preoperative corneal astigmatism) obtained with IOLMaster and three Pentacam modes (SimK, TCRP, and WFA). The estimates of preoperative corneal astigmatism were calculated with the following equation, based on the previous studies performed by Eom et al. 
[20,21], as follows: Estimated preoperative corneal astigmatism = postoperative residual astigmatism on the corneal plane (D_residual-cornea) − toric IOL cylinder power on the corneal plane (D_IOL-cornea) − actual surgically induced astigmatism of the cornea. Postoperative residual astigmatism (D_residual) and toric IOL cylinder power (D_IOL) were first converted to the corneal plane using formulae based on the net corneal power (K), measured with Pentacam, and the effective lens position (ELP), measured according to our previous study [22]. The alignment of the actual toric IOL axis, taken from the retroillumination photograph obtained postoperatively with OPD-Scan III, was used for the vector analysis. We then calculated the prediction errors with the following equation: Prediction error = preoperative corneal astigmatism measured with each modality − estimated preoperative corneal astigmatism. Both the centroid and absolute mean values of the prediction errors were calculated in this study. Statistical Analysis The vector analysis was performed using an online calculator (www.isrs.org, accessed on 2 January 2023). All analyses were conducted with the SPSS software (version 23.0, IBM, San Francisco, CA, USA). Continuous data are presented as means ± standard deviations. Differences in continuous variables were compared between groups with Student's t test, and differences in categorical variables were compared with a χ2 test. A paired t test was used to compare data before and after surgery within the same group. Multiple linear regression was used to analyze the factors influencing the absolute residual astigmatic refractive error. Friedman's two-way analysis of variance by rank with a post hoc paired analysis was used to compare the absolute prediction errors in corneal astigmatism among the different measurement modes. p values of <0.05 were considered statistically significant. Table 1 shows the characteristics of both groups. In the asymmetric group, 13 eyes (43.3%) had I-S > 1.5 D and the other 17 eyes (56.7%) had S-I > 2.5 D. The age, sex, operated eye, axial length, and corneal astigmatism determined with IOLMaster did not differ between the two groups (all p > 0.05; Student's t test for age, axial length, and corneal astigmatism, and χ2 test for sex and operated eye). The difference in the distribution of corneal astigmatism orientations between the two groups was not significant (χ2 test, p > 0.05). However, the total irregular corneal astigmatism higher-order aberrations and coma aberrations were significantly greater in the asymmetric group than in the symmetric group (Student's t test, both p < 0.05). Postoperative Refractive Outcomes in the Two Groups The postoperative misalignment of the toric IOL axis was similar in the two groups (4.33 ± 2.97° vs. 4.17 ± 2.74°, Student's t test, p = 0.829). In the asymmetric and symmetric groups, the centroid mean values for the postoperative residual astigmatism were 0.40 D@176° and 0.28 D@174°, respectively, and the absolute mean values were 0.69 ± 0.41 D and 0.52 ± 0.30 D, respectively (Student's t test, p = 0.066), both of which were significantly lower than the preoperative corneal astigmatism (paired t test, both p < 0.05). The residual astigmatism was more than 0.5 D in 47% (14/30) of eyes in the asymmetric group and in 30% (9/30) of eyes in the symmetric group (χ2 test, p = 0.184). 
Among the eyes with more than 0.5 D of residual astigmatism, nine were overcorrected in the asymmetric group, versus one eye in the symmetric group (Fisher's exact test, p = 0.029). Figure 2 shows the double-angle plots of the residual astigmatic refractive errors in both groups. The centroid residual astigmatic refractive errors were 0.53 D@175° in the asymmetric group and 0.31 D@173° in the symmetric group. The mean absolute residual astigmatic refractive error was significantly greater in the asymmetric group than in the symmetric group (0.72 ± 0.42 D vs. 0.53 ± 0.24 D, respectively, Student's t test, p = 0.043). The asymmetric group had more eyes with residual astigmatic refractive errors of more than 1 D than the symmetric group (Table 2). After adjustment for age, sex, operated eye, axial length, corneal astigmatism value, and misalignment of the toric IOL axis, we found that asymmetric bowtie astigmatism was a significant risk factor (relative to symmetric bowtie astigmatism) for the absolute residual astigmatic refractive error (multiple linear regression, β = 0.309, p = 0.021). Corneal Astigmatism Prediction Errors in the Asymmetric Group The double-angle plots of the prediction errors measured using the four different modes in the asymmetric group are shown in Figure 3A-D. The centroid prediction errors obtained with the IOLMaster, SimK, TCRP, and WFA modes were 0.23 D@80°, 0.25 D@80°, 0.26 D@78°, and 0.10 D@88°, respectively. The mean absolute prediction errors of the four modes were 0.53 ± 0.40, 0.56 ± 0.47, 0.68 ± 0.52, and 0.43 ± 0.40 D, respectively, and the values differed significantly among the four modes (Friedman's two-way analysis of variance, p = 0.002). A post-hoc paired test showed that the WFA mode had a significantly lower absolute prediction error than the TCRP mode (p = 0.002). However, there were no significant differences between the other modes. The proportions of eyes with an absolute prediction error of ≤0.5 D or ≤1.0 D according to the measurement mode are shown in Figure 4. The proportions were greatest in the WFA mode, with an absolute prediction error of ≤0.5 D for 83.3% of eyes and ≤1.0 D for 90% of eyes. The absolute prediction error of the WFA mode was also positively correlated with the total irregular astigmatism higher-order aberration and the coma aberration of the cornea (Pearson's r = 0.416 and r = 0.473, both p < 0.05; Figure 5). 
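The SIA and prediction-error computations above are vector operations, which the authors performed with the isrs.org online calculator. As a sketch of the underlying arithmetic: a cylinder of magnitude C at axis θ can be written as the double-angle vector (C cos 2θ, C sin 2θ), in which astigmatisms subtract component-wise. This is a textbook-style equivalent, not the authors' tool; every numeric value below is an invented example, and the sign/axis convention for the IOL term is simplified.

```python
# Double-angle vector arithmetic for the SIA and prediction-error equations.
import numpy as np

def to_vec(cyl, axis_deg):
    """Cylinder (D) at axis (degrees) -> double-angle vector components (D)."""
    a = np.deg2rad(2.0 * axis_deg)
    return np.array([cyl * np.cos(a), cyl * np.sin(a)])

def to_polar(v):
    """Double-angle vector -> (magnitude in D, axis in degrees, 0-180)."""
    mag = float(np.hypot(v[0], v[1]))
    axis = (np.degrees(np.arctan2(v[1], v[0])) / 2.0) % 180.0
    return mag, axis

# SIA = postoperative TCRP astigmatism minus preoperative TCRP astigmatism.
sia = to_vec(0.90, 95) - to_vec(1.10, 92)

# Estimated preoperative corneal astigmatism =
#   residual astigmatism (corneal plane) - IOL cylinder (corneal plane) - SIA,
# with the IOL vector taken along the photographed implantation axis.
est_pre = to_vec(0.70, 176) - to_vec(1.50, 90) - sia

# Prediction error = measured preoperative astigmatism - estimate.
pred_err = to_vec(2.20, 88) - est_pre
print("estimated pre-op astigmatism:", to_polar(est_pre))
print("prediction error:", to_polar(pred_err))
```

Centroid prediction errors correspond to averaging these vectors across eyes before converting back to magnitude and axis, whereas the mean absolute error averages the magnitudes directly.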
Discussion Previous studies have demonstrated the good performance of toric IOLs in eyes with regular symmetric corneal astigmatism following improvements in biometric accuracy [4]. However, many surgeons remain cautious about the use of toric IOLs for correcting asymmetric bowtie astigmatism because there is no ideal surgical plan. Nevertheless, a viewpoint has recently been reached that even partial correction of corneal astigmatism during cataract surgery might benefit these patients, as long as it is properly planned [10]. Therefore, to provide evidence supporting this viewpoint, we first evaluated the refractive outcomes of toric IOL implantation in eyes with asymmetric bowtie astigmatism, and then compared the accuracy of different measurement modes within this context. 
We found that, although the residual astigmatic refractive error was greater in the asymmetric group than in the symmetric group, the postoperative residual astigmatism of the asymmetric group was significantly improved relative to the preoperative state. For these eyes, the Pentacam WFA mode may be the preferable measurement mode. However, its accuracy decreased as the degree of asymmetry increased. Our data show that toric IOLs can be effectively used to correct asymmetric bowtie corneal astigmatism. Many studies have examined the efficacy of toric IOL implantation during cataract surgery. A systematic review and meta-analysis showed that the average residual astigmatism at 3-6 months after toric IOL implantation was 0.53 D, ranging from 0.18 to 0.77 D [4], which is similar to our results for the symmetric group. Cases referred to as 'asymmetric astigmatism' in previous studies often had keratoconus, pterygium, or a history of corneal surgery, for example [10,23-25], which are typically regarded as irregular astigmatism and cannot be completely corrected. The mean residual astigmatism in these cases was 0.87 ± 1.10 D, ranging from 0 to 5.50 D, which is higher than our result for the asymmetric group. This discrepancy arises because we focused on a specific type of asymmetric astigmatism, asymmetric bowtie astigmatism, in which the power on both sides differs, although the orientations are the same. Therefore, there are symmetric components in this type of asymmetric astigmatism. We investigated whether the astigmatism in these patients could be corrected with toric IOLs. We found that the postoperative residual astigmatism was significantly reduced, from 2.18 ± 0.70 D to 0.69 ± 0.41 D, confirming that cataract surgery with toric IOL implantation is effective in correcting asymmetric bowtie astigmatism, with better results than those reported for irregular astigmatism in previous studies [10,23-25]. However, irregular astigmatism may have more irregular aberrations that cannot be corrected by toric IOLs and thus lead to greater residual astigmatism. As expected, the error between the actual and predicted postoperative residual astigmatism was greater in the asymmetric group, and overcorrection was more frequent in these eyes. A possible explanation is that, if the power of the toric IOL was simply calculated based on the available biometric data, which are largely affected by the power of the larger semi-bowtie, the astigmatism may be overestimated. A lot of work has been carried out to reduce residual refractive error after cataract surgery. Accurate assessment of ocular biometrics plays an important role in these efforts. Many studies have compared the accuracy of different methods of measuring astigmatism. One study compared the keratometry measurements obtained from IOLMaster and Pentacam and showed that IOLMaster had superior performance in predicting postoperative astigmatism, which differs from our findings [26]. Another study compared the effect of corneal irregularity on astigmatism assessment using IOLMaster and Pentacam. They found that corneal irregularities could significantly impact astigmatism assessment by both instruments and that Pentacam (TCRP) was more accurate in predicting postoperative residual astigmatism in highly irregular corneas [27]. Tana-Rivero et al. compared the ocular parameters measured by three different devices (ANTERION, IOLMaster 700, and Pentacam) and found no significant difference in Ks and Kf [28]. 
These differences may be due to variations in the patients, devices, and ocular conditions studied. Interestingly, we found that among the four measurement modes tested, the Pentacam WFA mode showed the greatest accuracy for asymmetric bowtie astigmatism. Asymmetric bowtie astigmatism can be seen as a combination of a symmetric bowtie and coma, and only the former can be corrected by a toric IOL. A Hartmann-Shack aberrometer is used in the Pentacam to capture wavefront data at multiple locations [29], and the Hartmann-Shack aberrometer has better repeatability than other sensor technologies [30]. The Pentacam WFA mode measures the regular part of the corneal astigmatism based on the second-order astigmatism aberration of the total cornea, whereas the other measurement modes, especially the Pentacam TCRP mode, include the coma aberration of the larger semi-bowtie, which may overestimate the magnitude of the corneal astigmatism. We also found that, as the corneal asymmetry increased, the accuracy of the WFA mode in calculating the toric IOL decreased. Based on a computational model, previous studies have estimated that the refractive effect of a root-mean-square higher-order aberration of ≥0.43 µm is equivalent to a spherical error of ≥0.5 D for a 5 mm pupil aperture [31,32]. Another study showed that the vertical and horizontal coma of the cornea influence refractive astigmatism in different ways [33]. Therefore, in corneas with a higher-order aberration of ≥0.43 µm or with large coma aberrations, these effects should be taken into consideration in toric IOL calculation.

There are also some limitations to our study. Firstly, it is a retrospective, single-center study; multicenter, large-sample, prospective clinical studies are needed to further confirm the results. Additionally, the effects of astigmatism axis position and bowtie size should be analyzed in a large cohort. Secondly, total corneal keratometry can also be measured with the IOLMaster 700, which may provide a more accurate measurement of asymmetric bowtie corneal astigmatism; further studies are needed in this regard.

In conclusion, toric IOLs can be used to correct asymmetric bowtie corneal astigmatism. Corneal astigmatism measured with the Pentacam WFA mode may be a better choice for surgical planning in these cases; however, its accuracy may decrease as the corneal asymmetry increases.
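The over- and undercorrection analysis above rests on standard double-angle vector arithmetic for astigmatism, in which each value (magnitude in diopters, axis in degrees) is treated as a vector whose angle is doubled, so that 0.5 D at 0° and 0.5 D at 90° cancel. The minimal Python sketch below illustrates this convention; the function names and example values are ours for illustration and are not taken from the study.

```python
# Minimal sketch of double-angle astigmatism vectors; example values
# are illustrative, not data from the study.
import math

def to_double_angle(magnitude_d, axis_deg):
    """(magnitude in D, axis in deg) -> Cartesian double-angle components."""
    theta = math.radians(2.0 * axis_deg)
    return magnitude_d * math.cos(theta), magnitude_d * math.sin(theta)

def from_double_angle(x, y):
    """Back-convert components to (magnitude in D, axis in 0-180 deg)."""
    return math.hypot(x, y), (math.degrees(math.atan2(y, x)) / 2.0) % 180.0

# Hypothetical predicted vs. measured postoperative residual astigmatism.
px, py = to_double_angle(0.50, 90.0)   # predicted: 0.50 D at 90 deg
mx, my = to_double_angle(0.90, 95.0)   # measured:  0.90 D at 95 deg

# The prediction error is the magnitude of the difference vector
# computed in this doubled-angle space.
mag, axis = from_double_angle(mx - px, my - py)
print(f"prediction error: {mag:.2f} D at {axis:.0f} deg")
```

Reporting the error as this vector difference, rather than as a difference of magnitudes alone, is what allows overcorrection along a particular meridian to be distinguished from simple undercorrection.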
GenProBiS: web server for mapping of sequence variants to protein binding sites

Abstract

Discovery of potentially deleterious sequence variants is important and has wide implications for research and generation of new hypotheses in human and veterinary medicine, and drug discovery. The GenProBiS web server maps sequence variants to protein structures from the Protein Data Bank (PDB), and further to protein–protein, protein–nucleic acid, protein–compound, and protein–metal ion binding sites. The concept of a protein–compound binding site is understood in the broadest sense, which includes glycosylation and other post-translational modification sites. Binding sites were defined by local structural comparisons of whole protein structures using the Protein Binding Sites (ProBiS) algorithm and transposition of ligands from the similar binding sites found to the query protein using the ProBiS-ligands approach, with new improvements introduced in GenProBiS. Binding site surfaces were generated as three-dimensional grids encompassing the space occupied by predicted ligands. The server allows intuitive visual exploration of comprehensively mapped variants, such as human somatic mis-sense mutations related to cancer and non-synonymous single nucleotide polymorphisms from 21 species, within the predicted binding site regions for about 80 000 PDB protein structures using fast WebGL graphics. The GenProBiS web server is open and free to all users at http://genprobis.insilab.org.

INTRODUCTION

Sequence variants that occur in coding regions of genes and alter a protein's amino acid sequence presumably affect protein function. Variants can occur in genes of somatic cells, for example mis-sense mutations in cancers, or of germline cells, such as non-synonymous single nucleotide polymorphisms (nsSNPs). The latter can either substitute amino acids (mis-sense SNPs) or introduce premature stop codons, or nonsense codons, resulting in incomplete proteins (nonsense SNPs) (1). Non-synonymous SNPs affect phenotypic diversity, disease development and response to drugs. Both somatic and germline sequence variants have been linked to various cancers (2) and other diseases (3). Sickle-cell anemia is a classic example of a disease caused by a single nsSNP, where a glutamic acid residue is replaced by valine in hemoglobin (4).

Binding sites on proteins interact with various ligands and hence govern the biochemical functions of proteins. It was found that disease-causing nsSNPs are preferentially located at protein–protein interfaces rather than in non-interface regions of protein surfaces (5). Significant enrichments of somatic mis-sense mutations were found within protein–protein, protein–nucleic acid and protein–metal ion binding sites in several proteins involved in tumorigenesis (6). As such, binding site sequence variants are of great interest to drug development chemists and clinicians who seek to predict an individual's response to a drug. A variety of algorithms, web servers and databases have been developed to identify nsSNPs which influence protein function (7-9) and response to drugs (10). Mapping of nsSNPs to Protein Data Bank (PDB) (11) protein structures has been accomplished for human proteins (11-15) as well as for both human and non-human proteins (16) but, to our knowledge, mapping of somatic mutations and nsSNPs from many different species to diverse types of binding sites and further, to each site's ligand, specifically for all PDB protein structures, does not exist.
Detection of protein binding sites is a challenging task. Proteins typically bind several different ligands, but any single protein structure in the PDB only contains one or a few co-crystallized ligands and thus shows an incomplete state of the actual binding sites. To finesse this problem, we define binding sites on proteins using the ProBiS-ligands approach (17), which has been improved in GenProBiS. This accounts for the co-crystallized ligands from the same binding site, as well as for the ligands binding to similar binding sites in other PDB structures. The approach detects and aligns similar binding sites irrespective of their proteins' folding patterns using the ProBiS algorithm (18). In this algorithm, protein structures are represented as graphs, in which vertices represent functional groups of surface amino acids and edges are drawn between pairs of vertices that are <15 Å apart. Two protein graphs are divided into several subgraphs that together completely sample the two protein surfaces. From each pair of protein subgraphs, a product graph is constructed, i.e. an approximate representation of all possible local superimpositions of the two protein structures. Using our maximum clique algorithm (19), the largest complete subgraph is detected within each product graph, which corresponds to the best local superimposition of the two compared protein structures. Ligands co-crystallized in the superimposed similar binding sites are then transposed to the query protein based on this superimposition. The transposed ligands are clustered by their spatial proximity, and each such cluster represents one binding site. Finally, degrees of structural evolutionary conservation are calculated for each query protein's amino acid residue from the multiple protein structure alignment (18). Recently, a variation of this approach was successfully used for the discovery of small-molecule inhibitors of the InhA enzyme in Mycobacterium tuberculosis, which resulted in the identification of three previously unrecognized inhibitors with novel scaffolds (20).

In this paper we describe a new web server, GenProBiS, which allows mapping of human somatic mis-sense mutations related to cancer and nsSNPs from genome sequences of 21 species to protein binding sites in the PDB. The concept of a binding site is understood in the broadest sense, which includes glycosylation and other post-translational modification sites; these are classified in GenProBiS under protein–compound binding sites. Binding sites are defined as the space occupied by atoms of all co-crystallized ligands transposed to the query protein from PDB entries sharing similar binding sites with the query protein. Binding site grids are generated and visualized as solvent accessible molecular surfaces. GenProBiS enables detection of sequence variants within a protein binding site and visual exploration of interactions, or loss of interactions, of a specific mis-sense mutation with a specific ligand. We show the usability of GenProBiS on selected disease-related nsSNPs and somatic mutations whose importance in disease development and potential drug response effects can be explained by their presence in binding sites and their interactions with ligands.

GENPROBIS WEB SERVER

The GenProBiS web server implements a novel approach to the discovery of sequence variants that have a potentially deleterious effect on protein function and ligand binding through gain or loss of the binding site (Figure 1).
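As background for what follows, the product-graph and maximum-clique matching at the heart of ProBiS can be illustrated with a toy Python sketch on two sets of labeled 3D points standing in for surface functional groups. This is an illustration only, not the ProBiS implementation: the labels, coordinates, and the 1 Å distance-compatibility tolerance below are hypothetical choices of ours.

```python
# Toy correspondence search via a product graph and maximum clique.
import itertools
import math
import networkx as nx

def dist(a, b):
    return math.dist(a, b)  # Euclidean distance; Python >= 3.8

# Hypothetical functional-group descriptors: (label, xyz coordinates).
prot_a = [("donor", (0, 0, 0)), ("acceptor", (4, 0, 0)), ("pi", (0, 5, 0))]
prot_b = [("donor", (10, 10, 10)), ("acceptor", (14, 10, 10)), ("pi", (10, 15, 0))]

TOL = 1.0  # distance-compatibility tolerance in angstroms (our choice)

G = nx.Graph()
# Product-graph nodes: pairs of same-type groups, one from each protein.
pairs = [(i, j) for i, (la, _) in enumerate(prot_a)
                for j, (lb, _) in enumerate(prot_b) if la == lb]
G.add_nodes_from(pairs)
# Edge if the two correspondences preserve pairwise distances within TOL.
for (i1, j1), (i2, j2) in itertools.combinations(pairs, 2):
    if i1 != i2 and j1 != j2:
        da = dist(prot_a[i1][1], prot_a[i2][1])
        db = dist(prot_b[j1][1], prot_b[j2][1])
        if abs(da - db) < TOL:
            G.add_edge((i1, j1), (i2, j2))

# The largest clique is the biggest geometrically consistent correspondence,
# i.e., the best local superimposition of the two point sets.
best = max(nx.find_cliques(G), key=len)
print("best correspondence:", best)
```

In this toy example the donor and acceptor pairs are mutually distance-consistent and form the maximum clique, while the displaced "pi" group is left out, mirroring how only locally similar surface patches are superimposed.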
Currently, the web server maps around 550 000 sequence variants to about 5 million amino acid residues in 80 000 PDB protein structures enriched with protein–protein, protein–nucleic acid, protein–compound and protein–metal ion binding sites. The sequence variants were collected from the UniProt variants dataset (21), which contains data from various databases, including around 95 000 somatic mis-sense mutations from human cancers from the COSMIC database (2) (22), around 500 polymorphisms from six plant species (Zea mays, Vitis vinifera, Sorghum bicolor, Solanum lycopersicum, Phytophthora infestans and Oryza sativa) from EnsemblPlants (23) and around 60 polymorphisms from Aedes aegypti, Ixodes scapularis and Anopheles gambiae obtained from the EnsemblMetazoa database (23). UniProt amino acid sequence locations of sequence variants were converted to PDB structure locations using the Structure integration with function, taxonomy and sequence (SIFTS) project conversion table (24).

Binding sites were predicted by local structural comparisons of whole protein structures using the ProBiS algorithm (18) and transposition of ligands from the similar binding sites found to the query protein using an updated ProBiS-ligands approach (17), with the following major improvements introduced in GenProBiS (the geometric rules in items vi-viii are sketched in the code below):

i) Protein, nucleic acid, compound and metal ion binding sites and ligands are predicted for ~300 000 protein chains in the PDB. The original ProBiS-ligands approach only enabled prediction of ligands for the 42 000 protein chains in the 95% non-redundant PDB.

ii) Predicted protein or nucleic acid ligands that severely clash with the query protein, i.e. have >10 atoms <1.0 Å from any query protein atom, are now discarded.

iii) The cutoffs for binding site similarity scores (z-scores), originally 1.0 for all ligand types, are now 2.5 for compounds, 3.0 for proteins, 3.0 for nucleic acids and 2.0 for metal ions. While binding site z-scores and whole-sequence identities are not directly comparable, a z-score of 2.0 in GenProBiS corresponds, as a rule of thumb, to ~30% sequence identity.

iv) Ligands have been clustered by their spatial proximity using the OPTICS algorithm (25), each cluster containing from a single to hundreds of ligands and representing one binding site, where the measure of distance is now the minimum distance between any two of their atoms; in an earlier approach we used the distance between the geometric centers of ligands, which did not cluster protein and nucleic acid ligands well.

v) Biologically relevant ion and compound ligands are identified using the list of non-specific binders and known crystallization artifacts at http://insilab.org/files/GenProBiS/non-specific.txt. Additionally, ions that belong to clusters with <10 members are considered artifacts.

vi) Binding site grids are now generated as hexagonal close-packed grids with a resolution of 1.5 Å, encompassing the space occupied by atoms of the predicted clustered ligands, where grid points had to be <4 Å from any predicted ligand's atom and <8 Å from any query protein atom.

vii) Protein residues <3 Å from any grid point are considered binding site residues.

viii) A residue and a ligand are considered to interact if the distance between any of their atoms is <5 Å.

[Figure 1: The GenProBiS web server approach, depicted on the example of nsSNPs mapping to a compound (low molecular weight ligand) binding site.]
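The grid and distance rules in items vi-viii can be prototyped in a few lines. The sketch below is a simplification of ours (a cubic rather than hexagonal close-packed grid, and random placeholder coordinates); only the numeric cutoffs (1.5 Å resolution, 4 Å, 8 Å, 3 Å and 5 Å) are taken from the text above.

```python
# Sketch of the grid rules above, simplified to a cubic grid; coordinates
# are random placeholders, only the cutoffs come from the text.
import numpy as np
from scipy.spatial import cKDTree

def binding_site_grid(ligand_xyz, protein_xyz, res=1.5):
    """Grid points <4 A from a ligand atom and <8 A from a protein atom."""
    lo, hi = ligand_xyz.min(axis=0) - 4.0, ligand_xyz.max(axis=0) + 4.0
    axes = [np.arange(a, b, res) for a, b in zip(lo, hi)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)
    near_lig = cKDTree(ligand_xyz).query(grid)[0] < 4.0
    near_prot = cKDTree(protein_xyz).query(grid)[0] < 8.0
    return grid[near_lig & near_prot]

def site_atoms(grid, protein_xyz):
    """Indices of protein atoms <3 A from any grid point (rule vii)."""
    return np.nonzero(cKDTree(grid).query(protein_xyz)[0] < 3.0)[0]

def interacts(residue_xyz, ligand_xyz, cutoff=5.0):
    """Rule viii: any residue-ligand atom pair closer than 5 A."""
    return bool(cKDTree(ligand_xyz).query(residue_xyz)[0].min() < cutoff)

rng = np.random.default_rng(0)
lig = rng.normal(0.0, 2.0, (20, 3))     # stand-in for transposed ligand atoms
prot = rng.normal(0.0, 6.0, (200, 3))   # stand-in for query protein atoms
grid = binding_site_grid(lig, prot)
print(len(grid), "grid points,", len(site_atoms(grid, prot)), "site atoms")
```

The KD-tree queries keep the filtering fast even across hundreds of thousands of chains, which is consistent with the precomputation strategy described below.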
Solvent accessible surfaces of binding site grids, which are visualized in the GenProBiS web server, have been precomputed using an in-house algorithm. Structurally mapped somatic mutations and nsSNPs were then assigned to one or more binding sites, and were labeled according to the binding site's ligand type (protein, nucleic acid, compound or ion) and the number of the binding site. To facilitate high-speed access to the binding sites, we precomputed protein binding sites for all protein structures in the PDB, i.e. around 300 000 combinations of PDB and Chain IDs. This binding site prediction across the entire PDB was computationally intensive: it was completed in about 2 months using 1400 CPUs. Future updates of the database will require considerably less time (about a week on a single CPU), since only the difference between the initial and the updated PDB will need to be recomputed.

INPUT

GenProBiS requires as input the PDB and Chain ID (11). It can also use dbSNP's reference SNP cluster (rs) ID (22), COSMIC's Mutation ID (2), UniProt ID or UniProt's Gene Symbol (21). The basic input is a protein structure (PDB and Chain ID); when these are entered, clicking the Search button takes the user directly to the results page. Alternatively, one may enter dbSNP's rs ID, COSMIC's Mutation ID, UniProt ID or UniProt's Gene Symbol, and then the Conversion tool opens and displays the list of PDB protein structures corresponding to the input. A user can then choose a specific structure for further exploration. Using the Custom input link, the user can also upload a list of custom variants with UniProt sequence positions and choose the PDB structure to which they are to be mapped.

OUTPUT

GenProBiS maps sequence variants to protein binding sites for the given query protein (Figure 2). The server allows intuitive visual exploration of mapped sequence variants within the predicted binding site regions using WebGL graphics implemented in the Molmil molecular viewer (26). Molmil allows visualization of large proteins and their multiple ligands in an internet browser. Users can explore three-dimensional (3D) poses of all the transposed ligands within the same query binding site and their potential interactions with mis-sense mutations, a feature not available elsewhere.

The GenProBiS results page has a Vertical Menu on the left side, and the remainder of the browser window is the 3D Viewer. Upon clicking on any of its main links, the vertical menu expands to display tables with sequence variant, binding site and ligand mapping data. Above, there is a camera icon which allows the user to save the current state of the 3D Viewer as a PNG picture; a play icon to open the Ligand Player (discussed in the Table of Ligands section below); a download icon to save the mapping of sequence variants to binding sites as a text file; and a link icon to the Evolutionarily Conserved Regions (ECR) genome browser (27) that allows exploration of alignments of the query protein's gene with those of different species. Below these icons, the main links are as follows.

Table of sequence variants

Sequence variants that are within and outside the predicted binding sites are listed in this table, in which each row contains: (i) three circular buttons to show (S), label (L) or zoom in (Z) on the sequence variant (e.g.
nsSNP) as a stick model on the query protein structure in the 3D Viewer; (ii) a description of the amino acid change, for example Asn78Ser, which indicates that asparagine changes to serine at the 78th position in the protein sequence according to the UniProt sequence numbering; (iii) if a sequence variant is in one or more binding sites, this is shown as one or more small circles, whose colors indicate the ligand type: brown for metal ions, green for compounds, yellow for proteins and blue for nucleic acids. A number inside each circle is the binding site number; (iv) the variant's accession number, which also serves as an http link that allows exploring the sequence variant in its original database; (v) where available, links to various annotation databases, such as ClinVar (3) and PharmGKB (10).

Table of binding sites

Protein binding sites and the sequence variants associated with each binding site are listed in this table. Binding sites can be selected according to their ligands' types with buttons labeled Compound, Ion, Nucleic, and Protein, and by binding site numbers within each ligand type. Selecting a binding site results in a table with its mapped sequence variants. The Sticks and Surface buttons above this table allow each binding site to be displayed either as a sticks or a surface model, the latter being the default view.

Table of ligands

Selecting a binding site according to its type and number prompts the display of a table of its corresponding ligands. Each ligand (or several ligands at once) can be selected using the (S) button, resulting in the ligands' 3D structures being displayed in the query protein in the 3D Viewer. Interactions of ligands with sequence variants can be seen by clicking the (I) button, which opens a table listing the minimum distances between all the ligands with the same name and the sequence variant residues. Clicking on a row in this table zooms in and shows the corresponding interaction as a line in the 3D Viewer. In the Ligand column is the name of the ligand (its PDB code or Ligand ID), which is also an http link to the ligand's PDB web page. The Count column provides the number of ligands with the same PDB code or Ligand ID. Clicking the Ligand Player near the top opens a small console on the right side of the screen with play, forward, backward and stop buttons, which allows the user to browse through the ligand's predicted 3D poses one by one. This allows the user to visually examine interactions, shown as lines between ligands and variant amino acids, and determine a potential gain or loss of interactions, allowing for an estimation of the impact of a sequence variant on the protein's function and ligand binding.

Sequence viewer

The Sequence Viewer allows the user to see, as an alternative to the structural view, PDB protein sequences annotated with binding sites, sequence variants, and degrees of structural evolutionary conservation (Figure 3). The degrees of structural conservation, calculated from multiple structure alignments with the ProBiS algorithm (18), often indicate the position of binding sites or other functionally important sites.

Three-dimensional viewer

Most of the browser window is taken up by the 3D structural viewer, which initially displays the query protein as a cartoon model with one of the protein binding sites shown as a solvent accessible molecular surface. Mapped sequence variants are shown as ball-and-stick models; variants that are outside the currently selected binding site are purple, and binding site variants are red (Figure 2).
On the right side is a draggable menu with the PDB and Chain ID of the query protein, which allows different coloring schemes and styles to be applied to the query protein, crystal waters, co-crystallized ligands and hydrogens to be displayed, and the structure to be refocused in the center of the screen.

CASE STUDY 1: NSSNP AND SOMATIC MUTATION EFFECTS ON INHIBITOR BINDING

Indoleamine 2,3-dioxygenase (IDO1) is an enzyme that catabolizes tryptophan and has been demonstrated to have an immunosuppressive role (28). It is a validated oncotarget and is thought to be involved in one of the possible mechanisms by which cancer cells evade the immune response. Developed inhibitors of this enzyme, aside from binding to heme, form several key interactions with binding site amino acids, for example, Phe163, Phe226 and Arg231 (29). Using GenProBiS with the IDO1 query protein structure (4pk6A), we identified two of these amino acids, Phe163 and Arg231, as polymorphic (red sticks, Figure 2B). To analyze the effects of these polymorphisms on inhibitor binding, we used the Ligand Player console (Figure 2B) to browse through all the available co-crystallized inhibitors (listed in the table in Figure 2A). We discuss the recently developed imidazothiazole derivative inhibitors with PDB Ligand IDs PKJ and PKL (29):

i) rs764150078 (Phe163Ser) results in loss of the favorable pi-pi interaction of the imidazothiazole ring with phenylalanine and reduces the binding of both PKJ and PKL inhibitors (Figure 2B).

ii) rs774225205 (Arg231Cys) and rs745677091 (Arg231Leu and Arg231His) delete favorable electrostatic interactions of arginine with the inhibitor PKJ at the entrance to the binding site cavity.

iii) COSM187719 (Arg231Cys) is a somatic mutation that results in loss of electrostatic interactions with the PKJ inhibitor and could lead to drug resistance during cancer therapy with this inhibitor (Figure 2B).

These polymorphisms are likely to result in reduced effectiveness of the inhibitors and should be considered in the design of future inhibitors and in their potential clinical usage.

CASE STUDY 2: SOMATIC MUTATION IN P53 LINKED TO GLIOBLASTOMA MULTIFORME

Glioblastoma multiforme is the most aggressive and malignant subtype of human brain tumor. Variant rs121913343 in the TP53 gene was found in tumor tissue of patients with glioblastoma and was linked to tumor growth (30). The TP53 gene encodes the tumor suppressor protein p53, which plays an essential role in preventing cancer. Using the gene symbol TP53 as the query (the structure chosen in the Conversion Tool was 1gzhC), we observed that the mutation Arg273Ser corresponding to rs121913343 occurs in a nucleic acid binding site for the BAX response element (Figure 3) (31). We postulate that the replacement of arginine by serine vitiates the salt bridge interaction of the arginine with DNA's phosphate group. This weakens the p53-DNA interaction and decreases the tumor suppression activity of p53. The importance of this finding could be experimentally tested by comparing the stability of the wild-type p53-DNA complex with that of the mutated one.

CASE STUDY 3: INTERPRETATION OF GENOME-WIDE ASSOCIATION STUDIES

Serum concentration levels of intercellular adhesion molecule 1 (ICAM-1) have been associated with diverse conditions. In a genome-wide association study, several nsSNPs, including rs1799969, were associated with lower solubility of this protein in plasma (32).
Using rs1799969 as the query (the structure chosen in the Conversion Tool was 1p53B), we suggest that the decreased solubility may be due to this mutation disrupting the glycosylation of this protein. Glycosylation has been shown to increase the solubility of proteins (33) and indeed, rs1799969, which describes the change Gly241Arg, occurs in the N-glycosylation site (binding site #3) on ICAM-1. The substituted arginine (UniProt location: 241; PDB residue ID: 214) could form a salt bridge with the nearby aspartate (UniProt location: 268; PDB residue ID: 241) belonging to the N-glycosylation sequon Asn-Asp-Ser, thereby changing its structure and preventing glycosylation of ICAM-1. This result offers an alternative explanation for the effect of this polymorphism on the solubility of ICAM-1, which was previously thought to be due to weakened binding to the integrin MAC-1 (34).

CONCLUSION

GenProBiS is a web server designed for the detection and 3D visualization of sequence variants, such as somatic mis-sense mutations and nsSNPs, in protein binding sites. Binding sites and their ligands are predicted with no prior knowledge of binding sites, but based on detected local structural similarities in proteins and transposition of ligands between protein structures, irrespective of protein folding. GenProBiS allows functional effects of mutations on ligand binding to be suggested and as such represents a key tool in both drug discovery and personalized medicine. The results of the GenProBiS web server could enable focused laboratory experiments based on targeted hypotheses in several research fields, including human and veterinary medicine, and animal and plant breeding.
Antimicrobial metal-based nanoparticles: a review on their synthesis, types and antimicrobial action

Abstract

The investigation of novel nanoparticles with antimicrobial activity has grown in recent years due to the increased incidence of nosocomial infections occurring during hospitalization and of food poisoning derived from foodborne pathogens. Antimicrobial agents are necessary in various fields in which biological contamination occurs. For example, in food packaging they are used to control food contamination by microbes; in the medical field, antimicrobial agents are important for reducing the risk of contamination in invasive and routine interventions; and in the textile industry, they can limit the growth of microorganisms due to sweat. The combination of nanotechnology with materials that have an intrinsic antimicrobial activity can result in the development of novel antimicrobial substances. Specifically, metal-based nanoparticles have attracted much interest due to their broad effectiveness against pathogenic microorganisms, owing to their high surface area and high reactivity. The aim of this review was to explore the state of the art in metal-based nanoparticles, focusing on their synthesis methods, types, and antimicrobial action. Different techniques used to synthesize metal-based nanoparticles are discussed, including chemical and physical methods as well as "green synthesis" methods that are free of chemical agents. Although the most studied nanoparticles with antimicrobial properties are metallic or metal-oxide nanoparticles, other types of nanoparticles, such as superparamagnetic iron-oxide nanoparticles and silica-based release systems, also exhibit antimicrobial properties. Finally, since the quantification and understanding of the antimicrobial action of metal-based nanoparticles are key topics, several methods for evaluating in vitro antimicrobial activity and the most common antimicrobial mechanisms (e.g., cell damage and changes in the expression of metabolic genes) are also discussed in this review.

Introduction

In the last decades, the search for new antimicrobial substances against microbial contamination has been the focus of many research fields, in public and private research centers, in order to reduce nosocomial infections and foodborne diseases. The elimination of pathogenic microorganisms, such as bacteria, fungi, and yeast, in order to avoid health issues has been a major goal in these fields. Two terms can define the antibacterial efficiency of a given compound: an agent is considered "bacteriostatic" if it delays bacterial growth, maintaining the initial growth phase for a longer period of time, and "bactericidal" if it completely inhibits and kills the bacteria. However, the bacteriostatic and bactericidal effects exhibited by compounds during in vitro experiments depend on several factors, including bacterial density, test duration, growth conditions, and the reduction in bacterial concentration. For these reasons, in many studies the compounds are better described as substances with excellent antibacterial properties, since they can exhibit both effects [1]. Furthermore, the antibacterial effectiveness of most compounds differs depending on the type of bacteria exposed to them. Gram-positive and Gram-negative bacteria, for example, are widely studied categories because their different cellular structures might affect the antimicrobial effectiveness of a given compound.
Gram-positive bacteria have a thicker peptidoglycan layer, whereas Gram-negative bacteria contain a thin peptidoglycan layer and an outer membrane [2]. The presence of mold and yeast, mainly in food sources, has also attracted research interest [3,4]. Although several solutions have been proposed, microorganism incidence will continue to increase and will remain a complicated challenge to overcome. Due to the reoccurrence of infections, microorganisms have become resistant to antibiotics as a result of inherent genetic changes caused either by misuse or excessive use of drugs and antimicrobial agents, which significantly impacts the public health system. Thus, the research and development of a new generation of innovative and effective antimicrobial agents have become an urgent need. In this search, the scientific community has been focusing on the study of nanomaterials, mainly metal-based nanoparticles (NPs), to test their antimicrobial properties and feasibility to eradicate contamination sources and diseases [5]. The chemical, physical, and biological properties of NPs have been improved on the nanometer scale regarding their surface area, size, distribution, and morphology [6-8]. Research evidence shows that antimicrobial properties clearly depend on the synthesis method used to obtain the NPs. These synthesis procedures can be classified into physical, chemical, and biological methods [9]. In general, physical methods have the highest economic and energetic costs [10]. Therefore, research has leaned towards chemical synthesis methods, which are able to produce a large number of NPs in a shorter period of time [11]. The research in this area, especially the "green synthesis" methods, has undoubtedly received significant attention owing to their low environmental impact compared to other procedures [12,13]. There are different routes in which the green synthesis methods are applied, through the use of microorganisms and plants, in a safe, efficient, and profitable manner [14]. Different types of metal-based NPs have demonstrated antimicrobial activity over the last years. Several metal and metal oxide NPs, such as silver, copper, zinc oxide, titanium oxide, copper oxide, and nickel oxide NPs, are known to display antimicrobial activity [15-17] that depends on their composition, surface modification, intrinsic properties and the type of targeted microorganism [18]. A special category of metallic NPs is superparamagnetic iron-oxide nanoparticles (SPIONs) (e.g., magnetite (Fe3O4) and maghemite (γ-Fe2O3) NPs), whose antimicrobial activity increases upon the application of an external magnetic field [19]. An interesting strategy to increase the antimicrobial efficiency of metal-based nanoparticles is the use of silica and carbon compounds as delivery systems [20]. The broad range of metal-based nanoparticles, the types of NP synthesis, and their antimicrobial activity are further explored in this review. Different methods to analyze the efficiency of the antimicrobial activity of metal-based NPs are also discussed [21,22]. In addition, some particular NP antibacterial mechanisms that affect the different essential structures of the microorganisms are discussed, such as the induction of oxidative stress, the release of metal ions and non-oxidative damage.

Synthesis of antimicrobial nanoparticles

Over the last years, techniques for synthesizing antimicrobial nanoparticles have advanced significantly due to their use in both biomedical and industrial applications.
The properties of nanoparticles strongly depend on the synthesis technique used, which determines both the morphology and size of the NPs. As mentioned earlier, the synthesis methods can be grouped into physical, chemical, and biological (also called green synthesis) methods, which will be discussed in the next subsections.

Physical methods

Examples of physical methods used to synthesize NPs are the evaporation/condensation method, magnetron sputtering, mechanochemical processing (MCP), the microwave-thermal method, the photoreduction process, and pulsed laser ablation, among others [26]. The microwave-thermal method [27-30] allows small particles with a narrow size distribution to be obtained from different materials in a fast, safe, and environmentally friendly way. This technique allows for the synthesis of Ag [27], CuO [28], and MgO [29] NPs with sizes ranging from 1 to 25 nm. In addition, with this technique it is possible to control the geometry of the nanoparticles in order to obtain, for example, squared and polyhedral-shaped nanoparticles without compromising their size [30]. Another interesting technique is the very slow ultraviolet irradiation photoreduction process, in which the morphology of the nanoparticles can be controlled through the cation concentration and the irradiation time. For example, while Tan et al. [31] obtained spherical silver nanoparticles, Zhou et al. [32] obtained plate-like triangles. Another method is the pulsed laser ablation technique, which is used to synthesize colloidal solutions of Ag [33], Au [34], MgO [35], and ZnO [36] NPs, among others, via a high-power pulsed laser beam that hits a target of the material of choice. In this context, several physical methods have been used to synthesize nanoparticles, and the most relevant ones, along with the typical resulting particle sizes, are listed in Table 1. Depending on the preparation conditions, the size, length, and diameter of the nanostructures can be adjusted in order to control the physical properties of the NPs.

Chemical methods

A few examples of chemical methods that have been used to synthesize nanoparticles are the atomic layer deposition (ALD) method, the chemical reduction method, chemical vapor deposition, the electrochemical anodization method, hydrolysis, the hydrothermal method, the precipitation-hydrothermal method, the reverse micellar route, the sol-gel method, solution-based synthesis, solvothermal synthesis, and the sonochemical method. The most relevant ones, along with the typical resulting particle sizes, are listed in Table 2. The ALD method is employed to grow metal oxide and metallic three-dimensional nanostructures using porous alumina membranes [41], electrostatically spun nanofibers [39,40] or electrosprayed spherical particles [38] as templates. As Figure 1 shows, tetrakis(dimethylamide)titanium and water have been used as precursors on polymeric template structures obtained via electrospinning; the resulting hollow nanotubes and nanospheres had thickness values of approximately 20 and 17 nm, respectively [38,39]. ALD has been recognized as a key technique for depositing thin films on structures with complex geometries, allowing for the synthesis of nanostructures without shadowing effects and with a high aspect ratio, such as nanotubes with diameters ranging between 80 and 180 nm and lengths that can reach several tens of micrometers. The chemical reduction method was initially proposed by Michael Faraday in 1856-1857 while investigating the properties of colloidal gold.
This method generally uses a precursor, a reducing agent, and a stabilizer or protective agent [42]. Occasionally, a catalyst can be added to accelerate the reaction, as well as a solvent, which can favor the interaction of the chemicals. As an example, we highlight the work of Wang et al., in which well-dispersed spherical nanoparticles with sizes ranging from 20 to 80 nm were synthesized [42]. The chemical vapor deposition method is a technique in which the substrate is exposed to one or more volatile precursors, which react and/or decompose on the substrate surface to produce the desired thin film deposit. For example, Zhao et al. [44] obtained graphene-wrapped Ag nanowires using the chemical vapor deposition method in order to investigate their broad-spectrum and robust antimicrobial properties. The cryochemical synthesis method involves the simultaneous evaporation of a metallic and a volatile component (e.g., an organic monomer), followed by co-condensation of the vapors on the cold surface of the vacuum reactor. Sergeev et al. [45] obtained Ag nanoparticles with sizes ranging from 20 to 150 nm using this technique. The electrochemical anodization method is based on the reactions that occur between the electrode and the electrolyte; in this method, electricity is used as the driving or controlling force. The main advantages of electrochemical techniques include vacuum-free systems, simple operation, high flexibility, low cost, low contamination (pure product), and the fact that the process is environmentally friendly. In addition, with this technique one can control the morphology of the synthesized nanoparticles: for example, while Johans et al. [46] obtained spherical Ag nanoparticles, Galstyan et al. [47] synthesized nanoparticles in the form of elongated aggregates with a chain-like morphology. The hydrolysis technique, which involves the reaction of an organic chemical with water, has been used to obtain ellipsoidal monocrystallites of CeO2 with an average size of ≈7 nm [48,49]. The reverse micelle technique, also known as the microemulsion technique, is used to synthesize nanoparticles of various materials with different morphologies and sizes. For example, the technique has been used to obtain ultrafine MgO nanoparticles (8-10 nm) [50], TiO2 nanoparticles (10-20 nm) [51], and even flower-like microstructures (diameter ≈6 µm) and microtubes (diameter ≈1 µm and length ≈4 µm) [52]. The sol-gel technique has been widely used to synthesize nanoparticles since it is simple and relatively fast. With this technique it is possible to fabricate TiO2 NPs smaller than 10 nm [56,57] and MgO NPs with sizes ranging from 35 to 50 nm [53,54]. Furthermore, it has been used in the synthesis of N-Ag-TiO2-ZnO nanocages with diameters ranging from 300 to 500 nm [55]. The solvothermal synthesis method is used to prepare a variety of materials, such as metals, semiconductors, ceramics, and polymers. In this process, the chemical reaction takes place in a sealed vessel where solvents are brought to a temperature well above their boiling point, facilitating the interaction of the precursors during synthesis. Some examples are Y2O3 [58] and ZnO [59] nanoparticles with different morphologies and sizes. When water is used as the solvent, the method is called the hydrothermal technique, which is an easy and convenient method for growing NPs.
By varying the synthesis parameters, a variety of nanostructures, such as spherical CeO2 NPs [61], lamellar MgO NPs [62], and even ZnO nanorods [69], can be obtained with this method. In the sonochemical synthesis method, high-intensity ultrasound produces acoustic cavitation that can be used for the production or modification of a wide range of nanostructured materials. Some examples are spherical Ag [63] and CuO [65] NPs, and square-shaped CeO2 NPs [64]. The main advantage of this technique is the simplicity of maintaining the operating conditions (ambient conditions) and controlling the NP size.

Green synthesis

Generally, the procedures for obtaining nanoparticles are physical and/or chemical methods in which a precursor material reacts with reducing agents, as mentioned earlier. Nevertheless, both kinds of methods have been proven to be harmful to the environment and living organisms [70,71]. Moreover, the physical synthesis methods require expensive equipment, high temperature and high pressure, which makes them unprofitable and unscalable [10]. On the other hand, the chemical methods use and generate toxic chemicals that can cause dangerous effects to the environment, in addition to being cytotoxic and carcinogenic [72]. Several toxic chemicals adhered to the particles synthesized through these methods have been identified [13,73]. For these reasons, an interest in environmentally friendly nanoparticle synthesis methods, also called "green synthesis" or "nanoparticle biosynthesis" methods, has arisen. In addition to their ecologically friendly nature, these techniques present higher performance and lower energy costs (temperature and pressure), and they are profitable, biocompatible, safe, and easy to scale up [72,74]. At the same time, the number of antibiotic-resistant bacteria has increased, including strains that are resistant to more than 100 different types of antibiotics [75]. This problem, in addition to the environmental concerns, has inspired many researchers to develop ecologically friendly biosynthetic nanoparticles as antimicrobial agents. Several ecological routes have been investigated, focusing on the search for natural resources. Methods based on the biological synthesis of nanoparticles through the use of plant extracts [76,77], raw materials from fruits and vegetables [78], algae [79], bacteria, fungi [80], and residues [81] have been reported. The products obtained from these methods are called biogenic NPs [82], whereas the biological organisms used in these processes are called biological nanofactories, which can release proteic substances that are able to chemically reduce metal ions [83]. It is important to highlight that all the biological components have a unique chemical structure, their own metabolic pathways, and different responses depending on the metal ion used, which control and modify the synthesis of the NPs. The green synthesis of NPs can occur both intracellularly and in the extracellular milieu, where the biomolecules released from the cells are located. As an example of the latter, Pseudomonas stutzeri bacteria can successfully generate Ag NPs extracellularly [84]. Conversely, the bioreduction of iron followed by the precipitation of an oxide, which is subsequently transformed into FeO NPs, occurs through an intracellular synthesis pathway, since the bacteria or fungi carry the ions to the intracellular space [85].
The use of plants presents some advantages over other production sources, since phytochemicals can act as protecting and stabilizing agents, eliminating an additional step to prevent particle aggregation [86]. In addition, cell culture procedures are not necessary in this case, which allows for the large-scale synthesis of nanoparticles in a non-aseptic environment [87]. Furthermore, plant-based processes are cost-effective and safe for humans and the environment [10]. Different parts of plants can be used in the green synthesis of NPs. For example, spherical copper NPs (≈5-20 nm) were obtained by using Curcuma longa tuber extract and copper acetate dihydrate. These Cu NPs demonstrated excellent antibacterial activity against Gram-negative bacteria (inhibition zone diameter for E. coli: 22 ± 0.86 mm) and Gram-positive bacteria (inhibition zone diameter for B. subtilis: 23 ± 0.9 mm) [88]. Bio-reduction of silver nitrate with Parkia speciosa leaf extract generated spherical Ag NPs with an average particle size of 31 nm [89]; the strongest antibacterial activity was observed against S. aureus, followed by B. subtilis, E. coli and P. aeruginosa. By using latex extracted from an immature Carica papaya fruit and silver nitrate, spherical and highly stable Ag NPs were also obtained. The reduction in Gram-positive bacteria, such as E. faecalis and B. subtilis, was lower than the reduction in Gram-negative bacteria, such as V. cholerae, P. mirabilis, E. coli, and K. pneumoniae. ZnO NPs are of great interest because their synthesis is economical, safe and easy [72]. Vijayakumar et al. (2018) investigated the antibacterial and antifungal effects of spherical ZnO NPs (30 nm) that were successfully synthesized using Atalantia monophylla leaf extract [90]. The bactericidal effect against Gram-positive and Gram-negative bacteria was evaluated, and the highest inhibition values were obtained for B. subtilis (inhibition zone diameter of 20 mm) and K. pneumoniae (inhibition zone diameter of 19 mm). The maximum inhibition zones for fungi were observed against C. albicans (inhibition zone diameter of 24 mm), followed by A. niger (inhibition zone diameter of 18 mm). Over the last years, the replacement of plants by other biological sources, mainly bacteria and fungi, has increased. The antimicrobial activity of CuO NPs obtained from an actinomycete bacterium in a copper sulfate aqueous solution was tested against several pathogenic bacteria, such as Staphylococcus aureus, Bacillus cereus, Proteus mirabilis, Edwardsiella tarda, Aeromonas caviae, Aeromonas hydrophila and Vibrio anguillarum. The results showed that the largest zone of inhibition was obtained for B. cereus (inhibition zone diameter of 25.3 mm), followed by E. tarda (inhibition zone diameter of 22.6 mm), at a concentration of 100 µg/mL [91]. However, the synthesis of NPs using bacteria has several disadvantages, such as the high cost of culture media, the need for microbial screening, long process times, microbial contamination, and the lack of control over morphological characteristics. Due to these disadvantages, attention towards using fungi to synthesize NPs has increased, and even common food-contaminating fungi, such as white rot fungi, are being considered. Studies have shown that the antimicrobial properties depend on the type of fungus used.
Types of metal-based antimicrobial nanoparticles

Metallic and metal-oxide nanoparticles

Since ancient times, metal-based materials, such as silver and copper, have been used as antimicrobial agents by the Egyptians, Persians, Greeks and Romans. Nowadays, owing to the efforts in nanotechnology, metallic NPs synthesized through different methods have drawn the most attention due to their functionality [95]. Specifically, metallic NPs provide strong and extended antimicrobial activity at smaller dosages against a broad range of microorganisms due to their dimensions and shapes [96]. Table 3 shows some examples of potential antimicrobial metallic NPs. Silver nanoparticles have been considered among the most interesting antimicrobial metallic NPs due to their high efficiency against bacteria, fungi, and viruses. Their high antimicrobial activity enables their use in the pharmaceutical, food, fabric, and packaging industries [97-99]. Copper nanoparticles are nanomaterials with good chemical stability, heat resistance, and excellent antimicrobial properties due to their large surface-area-to-volume ratio. Their excellent antibacterial, antifungal, antiviral, and anti-inflammatory properties have prompted their application in many areas, such as the food industry.

On the other hand, metal oxide nanoparticles are inorganic nanomaterials which have also presented relevant antimicrobial properties against several pathogenic microorganisms. Zinc oxide, titanium dioxide, copper oxide, and nickel oxide are the most typical metal-oxide NPs with potential antibacterial, antifungal and antiviral activities [127,128]. These oxides have been applied in the food packaging industry and also in the medical field, as shown in Table 3. ZnO is a semiconductor metal oxide with significant antimicrobial properties that can be further improved when applied as a nanomaterial. ZnO NPs have potential application in food preservation, as well as important antibacterial properties against drug-resistant bacteria due to their size, shape and surface-capping agents [129,130]. Emamifar et al. (2010) developed orange juice packages based on low-density polyethylene (LDPE) nanocomposites with ZnO NPs; this packaging material presented a significant reduction in the microbial growth rate of Lactobacillus plantarum for up to 112 days of storage [112]. Star-like ZnO NPs were synthesized by the facile molten salt method and used to prepare nanocomposites with 2 or 4 wt % of ZnO NP load; the nanocomposites with 4 wt % of ZnO NPs exhibited the best antibacterial activity against Bacillus subtilis and Enterobacter aerogenes bacteria [109]. Titanium dioxide is also an inorganic material that is widely used in several products, including cosmetics and orthodontic composites, due to its excellent whitening, photocatalytic, and antimicrobial properties [131,132]. When the size of titanium dioxide is reduced to the nanoscale (TiO2 NPs), its photocatalytic property is greatly improved, generating more reactive oxygen species (ROS). ROS damage bacterial cells, DNA chains, and other cellular structures through oxidative stress. Therefore, the use of TiO2 NPs has been directed towards water disinfection and food packaging, in addition to their known use as a UV filter to prevent skin cancer [114]. In one reported antimicrobial assay, for example, mold growth on a cheese surface was prevented [3]. Nickel oxide nanoparticles have a multifunctional nature with interesting photocatalytic, electrochemical, and catalytic properties.
Furthermore, NiO NPs exhibit anti-inflammatory properties, generating interest in the biomedical field in using these NPs as antibiotics or in cancer treatments [116,133]. NiO NPs synthesized from Eucalyptus globulus leaf extract showed excellent antibacterial activities against E. coli, P. aeruginosa, and methicillin-sensitive and methicillin-resistant S. aureus [117]. Suganya et al. (2018) developed a potent antifungal nanocomposite with NiO NPs against the Aspergillus niger strain; the authors attributed the excellent antifungal properties to the physical process used to internalize the powdered nanomaterial in the fungal cells and also to the chemical process involving ROS generation [118]. Copper oxide is a metal oxide with the ability to target various bacterial structures, and its antimicrobial activity can be further improved on the nanoscale. Due to their excellent properties, CuO NPs have attracted great interest from the healthcare, food packaging, medical, and environmental industries [120,134]. This metal oxide is capable of disrupting the normal function of the cell membrane, changing its permeability and the cellular respiration process [135]. Matsuda et al. (2019) developed a fluoride-containing ZnO-CuO nanocomposite which inhibited the bacterial growth of Streptococcus mutans, showing a potential use in dental materials [124]. CuO nanorods (110 nm in length and 10 nm in diameter) were obtained by a precipitation process; antimicrobial studies revealed good activity against E. coli, S. flexneri, and S. aureus cells [123].

Superparamagnetic iron-oxide nanoparticles

Superparamagnetic iron oxide nanoparticles are a special class of metal-oxide NPs with magnetic properties and excellent biocompatibility. Their shape, size and magnetic nature enable them to kill microorganisms through the application of an external magnetic field, resulting in an increase in therapeutic antimicrobial properties, especially when compared to conventional antimicrobial compounds [136]. Ferromagnetic nanoparticles are probably the best-known and most studied SPIONs. Magnetite (Fe3O4) and maghemite (γ-Fe2O3) are two crystalline phases of iron oxide that present superparamagnetic properties at the nanoscale (<20 nm). This superparamagnetism arises from the reduced size of these nanoparticles, which allows for a higher surface-to-volume ratio, increasing the proportion of surface atoms [19]. In addition, when a magnetic field is applied, the magnetic moments of these ferromagnetic FeO NPs become aligned. The surface of SPIONs can be modified to specifically improve their functionality as antimicrobial compounds by increasing their interaction with bacterial cells [137]; for example, chemical groups can be grafted onto, and metals adhered to, these NPs. Mahmoudi and Serpooshan developed silver-ring-coated SPIONs through the coating of monodispersed SPIONs with carboxylated dextran via the ligand exchange method, followed by conjugation with ethanediylbis(isonicotinate), which allowed for the chelation of the metal ions. These SPION-silver core-shell NPs, with clear ligand gaps and magnetic properties, have the ability to absorb metallic NPs on their outer surface at a high packing density, which significantly enhances their properties [138]. Another strategy in which SPIONs are used to inhibit and/or reduce microbial incidence in biological and environmental applications is the application of weak magnetic fields [139]. Park et al.
demonstrated a 4 log inactivation of Pseudomonas aeruginosa through local heating created by using a 60 mg/mL SPION solution and applying an alternating current for 8 min.

Silica- and carbon-derived nanoparticles

Over the last years, several studies have revealed that silica nanoparticles are excellent antimicrobial metal-releasing systems due to their high chemical and thermal stability and good biocompatibility [20]. Si NPs enhance the bactericidal effects of some compounds, mainly metallic systems, against a broad range of microorganisms due to their easy delivery [20,140]. In addition, their surfaces can be easily modified with relatively inexpensive precursors, which can increase their efficiency. The bactericidal properties of nitric-oxide-releasing Si NPs against P. aeruginosa, E. coli, S. aureus, S. epidermidis, and Candida albicans were studied, and the results indicated an increased antimicrobial effectiveness due to the greater amount of nitric oxide released by the Si NPs [141]. In addition, the antimicrobial efficiency of silver and copper nanoparticles has also been improved through the development of Si-Ag and Si-Cu NPs. This is particularly important when the NPs present cytotoxic effects. Silver NPs were immobilized on hollow silica nanospheres or nanotubes, which increased their antimicrobial activity at lower Ag NP concentrations; this is an effect of the morphology of the tubular hollow structures, which retain the Ag NPs better [142]. Maniprasad and Santra (2012) developed novel core-shell silica structures containing highly dispersed Cu NPs; these antimicrobial NPs had lower minimum inhibitory concentration (MIC) values against E. coli and B. subtilis than copper hydroxide particles in suspension [143]. Silver-carbon complexes with different formulations, including micelles and NPs, have also shown an antimicrobial effect, since they inhibit the growth of some specific Gram-negative pathogenic bacteria, such as P. aeruginosa, Burkholderia cepacia, and Klebsiella pneumoniae, as well as antibiotic-resistant bacteria, such as S. aureus and Acinetobacter baumannii [144].

Antimicrobial action of metal-based nanoparticles

Methods for evaluating in vitro antimicrobial activity

The in vitro antimicrobial activity of metal-based NPs can be evaluated through several clinical microbiological methods, of which the disk diffusion and the broth or agar dilution methods are the main techniques used. The agar disk diffusion method is routinely used to analyze the growth of common microorganisms, such as bacteria, fungi, and yeast, in a rapid manner. As Figure 2 shows, in this method a standardized concentration of the microorganism is inoculated onto a Petri dish containing the growth culture medium, and filter paper disks with the antimicrobial agents are placed on the agar surface. The Petri dishes are incubated under appropriate growth conditions. The antimicrobial agent spreads through the agar and inhibits microbial growth, forming disk-shaped inhibition zones [145]. To obtain reliable results, a standardized methodology must be used in which the measured inhibition zone diameters can be correlated with the MICs of the antimicrobial agents. Thus, specific culture media, various incubation conditions, and interpretive criteria for the inhibition zones are used.
As a result, approximate MIC values can be obtained; however, this method cannot distinguish between bactericidal and bacteriostatic effects [146]. The agar diffusion method can also be modified depending on the research study. For example, in order to study the antibacterial activity of Ag NPs (synthesized using Papaya carica latex extract as the reducing agent) against different pathogenic bacteria, such as Bacillus subtilis, Enterococcus faecalis, Escherichia coli, Vibrio cholerae, Klebsiella pneumoniae, and Proteus mirabilis, the agar diffusion method was specifically modified. The modifications consisted of substituting the filter paper disks with wells made in the agar of the Petri dish, which were filled with different concentrations of Ag NPs. After the incubation period, the inhibition zones were measured. This alternative is generally called the agar well diffusion method. The MIC values of the Ag NPs against each bacterium were reported, and the highest antibacterial effect was observed against Gram-negative bacteria [10]. A second widely used method to measure antimicrobial activity is the agar (or broth) dilution method. This method consists of inoculating a standardized suspension of the microorganism to be tested into a series of agar plates (or broth tubes) containing various concentrations of the antimicrobial agents (Figure 3). After incubation under the appropriate conditions, the MIC can be determined and the results can be analyzed using approved cutoff points. When using this technique, the experimental conditions must be carefully controlled in order to achieve reproducible results [147]. This technique is usually combined with the dynamic contact methodology (ASTM E2149-10 directive), in which different NP concentrations are put into contact for a given time period with a solution containing a known concentration of microorganisms. After the NPs have exerted their antimicrobial activity in the liquid culture medium, the suspension can be further inoculated onto a Petri dish with agar and incubated under the growth conditions specific to the target microorganisms [145,147,148]. In general, the two methodologies described above are the most commonly used techniques. If more information is needed regarding the type of inhibitory effect (bactericidal or bacteriostatic) or the cell damage caused by the NPs against the target microorganism, time-kill tests and flow cytofluorometry, among other tests, can also be performed [146]. A kill curve can be constructed from the collected data in order to visualize the kinetics of the antimicrobial agent and to determine whether it has a bactericidal or bacteriostatic effect under certain established criteria (Figure 4). Variations of this method can determine the synergism or antagonism between two or more antimicrobial compounds, according to a ≥2 log difference in antimicrobial activity between the compounds used, and the best constituent can be identified after 24 h of incubation [149]. The cytofluorometry method is an advanced and important technique that allows for the accurate identification of a cell population by fluorescence tagging, using a flow cytometer and a photodetector [150]. In comparison to other techniques, this method requires expensive equipment and is therefore more commonly used in medical analysis and clinical medicine.
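To make the quantitative criteria above concrete, the minimal sketch below shows how a MIC can be read from a hypothetical broth-dilution series and how log reductions from a kill curve compare against the thresholds mentioned: the ≥2 log synergy criterion cited above and the conventional ≥3 log (99.9% kill) bactericidal cutoff. All concentrations and CFU counts are invented for illustration.

```python
import math

# Hypothetical broth-microdilution readout: NP concentration (ug/mL) -> visible growth?
growth_by_conc = {256: False, 128: False, 64: False, 32: True, 16: True, 8: True}

def mic(readout):
    """MIC = lowest tested concentration showing no visible growth."""
    return min(c for c, grew in readout.items() if not grew)

def log_reduction(cfu_initial, cfu_final):
    """Log10 reduction in viable counts between two time points of a kill curve."""
    return math.log10(cfu_initial / cfu_final)

print(mic(growth_by_conc))             # 64 (ug/mL)

# Bactericidal by the common >=3 log criterion at 24 h:
print(log_reduction(1e6, 5e2))         # ~3.3 logs

# Synergy per the >=2 log criterion, comparing a combination
# against its most active single constituent:
combo = log_reduction(1e6, 1e1)        # 5.0 logs
best_single = log_reduction(1e6, 5e3)  # ~2.3 logs
print(combo - best_single >= 2)        # True -> synergistic
```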
With the techniques mentioned above, it is possible to detect significant differences between a wide variety of antimicrobial compounds against common food pathogens in a simple way and without major equipment requirements.

Mechanisms of antimicrobial action

The exact antibacterial mechanisms of NPs are being exhaustively investigated, and some processes have been elucidated, including oxidative stress induction, metal ion release, and nonoxidative damage, which affect different structures in different microorganisms. Reactive oxygen species (ROS) are a group of molecules (or reactive intermediates) that, even though they exist in nature only for a short period of time (half-lives between 10^-9 and 10^-3 s), have a great oxidative potential that can ultimately be toxic to microorganisms [151]. Superoxide radicals (O2•−), hydroxyl radicals (•OH), hydrogen peroxide (H2O2), and singlet oxygen (1O2) are the best-known ROS. The mechanism that best explains the generation of ROS by NPs is based on their photocatalytic activity (Figure 5). Metal compounds receive enough energy from light irradiation to excite an electron and promote it from the valence band to the conduction band, leaving a highly reactive hole (h+) in the valence band. This hole becomes a ROS source as it interacts with the H2O or OH− surrounding the nanoparticles [152]. In addition to molecules such as ascorbic acid, carotene, and tocopherol, microorganisms have an enzymatic antioxidant defense system, including catalase and superoxide dismutase (SOD), which controls oxidative stress by reducing lipid peroxidation and the effects of ROS radicals such as O2•− and •OH. Under normal aerobic conditions, the production and clearance of ROS in cells are balanced by these enzymatic systems. Nevertheless, when these reactive species are in excess, a cascade of redox reactions can lead to cell death through the alteration of essential structures (such as the cell membrane, DNA, proteins, and the electron transport chain) and of the metabolic routes responsible for maintaining normal morphological and physiological cellular functions [153]. In addition to causing oxidative stress, metal ions released from metal oxide NPs can spread through the cell membrane into the cytoplasm and organelles. Metallic ions can interact with the functional groups of proteins and nucleic acids, such as thiol (-SH), amino (-NH2), and carboxyl (-COOH) groups, and may therefore affect enzymatic activities and several protein structures. Although the released metal ions are not the main source of damage caused by NPs, it is important to mention that some authors have identified these NPs as good carriers of other antimicrobial molecules, improving their transport to the target [154], offering protection against resistance by the target bacteria, and facilitating permeation through the cell membrane. Metal-based NPs can also allow for the combination of multiple antimicrobial agents in the same NP in order to improve their effects and overcome resistance mechanisms, such as efflux pump systems. The absence of lipid peroxidation biomarkers and the small amount of metal ions detected by energy-dispersive X-ray spectroscopy in bacteria exposed to MeO NPs have confirmed that oxidative damage and metal ion release are not the only antimicrobial mechanisms [155]. Under such nonoxidative damage, critical cellular processes related to proteins, including amino acid, carbohydrate, and nucleotide metabolism, are significantly reduced, leading to cell death.
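A quick way to see what "enough energy from light irradiation" means is to compare the photon energy with the band gap: using the standard relation λ_max = hc/E_g (≈1240 eV·nm divided by the gap in eV), the sketch below estimates the longest wavelength that can drive photocatalysis for two common photocatalytic oxides. The band-gap values are typical literature figures, not taken from this review.

```python
HC_EV_NM = 1239.84  # Planck constant x speed of light, in eV*nm

def max_excitation_wavelength_nm(band_gap_ev):
    """Longest photon wavelength able to promote an electron across the band gap."""
    return HC_EV_NM / band_gap_ev

# Assumed, typical band gaps for two photocatalytic metal oxides:
print(f"anatase TiO2 (3.20 eV): {max_excitation_wavelength_nm(3.20):.0f} nm")  # ~387 nm (UV)
print(f"ZnO          (3.37 eV): {max_excitation_wavelength_nm(3.37):.0f} nm")  # ~368 nm (UV)
```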
The combination of oxidative stress, metal ion release, and nonoxidative damage affects cell structures upon NP exposure in several ways. In the following sections, these types of cell damage are briefly explained.

Cell wall damage

The bacterial cell wall provides rigidity, shape, and protection to the cell against osmotic rupture and mechanical damage. It is the first barrier against harmful particles from the environment, such as oxidative molecules. Each type of microorganism has a different cell wall composition: i) fungal and yeast cell walls are mainly composed of chitin and polysaccharides; ii) Gram-positive bacteria contain many layers of peptidoglycan and teichoic acid (20-50 nm); and iii) Gram-negative bacteria present a few layers of peptidoglycan surrounded by a second lipid membrane containing lipopolysaccharides and lipoproteins [156,157]. Therefore, cell wall damage caused by NPs can occur through different processes. Many studies have shown that NPs are more active against Gram-positive bacteria than against Gram-negative bacteria. The negative charges conferred by the lipopolysaccharides in the outer membrane of Gram-negative bacteria only slightly attract NPs [158]. In addition, the double membrane acts as a selective physical barrier against hydrophobic compounds, such as detergents and antibiotics. On the other hand, Gram-positive bacteria have higher permeability, even with a thick layer of peptidoglycan, since the single membrane is not enough to prevent the entrance of foreign molecules. Moreover, their cell wall carries a higher negative charge than that of Gram-negative bacteria [159], conferred by the peptidoglycan and teichoic acid structures, which strongly attracts NPs, resulting in cell membrane damage and cell death [160]. Silver, gold, zinc oxide, and titanium dioxide NPs can be attracted to the cell wall by electrostatic attraction [161], van der Waals forces [162], and hydrophobic interactions [163], inducing changes in the shape, function, and permeability of the cells.

Proteins and DNA

Proteins play a fundamental role in microorganisms, catalyzing metabolic reactions, and are an essential part of cellular structures. Proteomic analysis has revealed deregulation of proteins involved in nitrogen metabolism, electron transfer, and substance transport in the presence of CuO NPs [164]. Silver ions released from Ag NPs can affect the expression of ribosomal subunits and interact with sulfur- and phosphorus-containing groups of proteins, including those in the bacterial cell wall and plasma membrane [165,166]. Cui et al. (2012) showed that Au NPs prevented the binding of tRNA to the ribosomal subunit and collapsed the membrane potential (Figure 6a), inhibiting ATPase activity. This, in turn, reduced ATP levels and stimulated the generation of ROS, simultaneously affecting other structures (Figure 6b) [167]. Genomic analyses have shown that TiO2 NPs can affect microbial replication, transcription, and cell division, since ROS can generate DNA mutations. These modifications may target the sugar-phosphate backbone or the nucleobases and cause saccharide fragmentation and strand breaks [160]. This nanoparticle-induced cleavage was studied in the pBR322 plasmid in the presence of Ag NPs via electrophoresis [168]. The results showed that guanine is the most affected nucleobase due to its low redox potential, and its oxidation produces a wide variety of modifications that ultimately affect DNA function (Figure 7) [169].
NPs can affect not only bacteria but also other, more complex multicellular organisms through induced genetic damage [170].

Changes in expression of metabolic genes

Every enzymatic detoxification system (e.g., SOD, glutathione, and catalase) is regulated by a signal that senses the ROS level and changes the expression of a certain set of genes so as to protect against and minimize the oxidative damage. In addition, microorganisms can modify metabolic routes and redirect resources to repair and reinforce damaged structures, such as the cell membrane or the DNA itself, through the overexpression of genes related to those functions. In Escherichia coli and Pseudomonas putida, genes related to the general stress response were upregulated. Genes protecting against hydrogen peroxide damage (catalase/hydroperoxidase) and against superoxide radicals (superoxide dismutase and the superoxide-removal transcriptional activator) were upregulated between 3.2-fold and 9.2-fold after a 2 h incubation period with Ag NPs [171].

Conclusion

Research related to the development of novel antimicrobial nanoparticles is highly relevant today. In this review, the main issues regarding antimicrobial nanoparticles, including their synthesis techniques, types, the characterization of their properties, and their antimicrobial mechanisms, are discussed. Different methods for obtaining nanoparticles based on traditional physical and chemical procedures have been compared, as well as the most innovative technologies based on the so-called green synthesis methods, which have attracted much attention lately due to their reduced environmental impact. Most antimicrobial nanoparticles are based on metal and metal-oxide compounds, and the strategies used to control their delivery and to increase their antimicrobial activity have involved the use of silica nanoparticles in their manufacturing process. Although oxidative stress is the main mechanism by which nanoparticles eliminate microorganisms, other processes may also be intimately related to their antimicrobial activity. The information presented in this work encourages the search for new nanomaterials with controlled morphology and dimensions. It also highlights synthesis processes that allow nanomaterials to be produced in a controlled manner and yield ecologically friendly materials, as well as the need for further investigation into the possible mechanisms of these nanoparticles and the development of new substances with high antimicrobial activity.

Future Perspectives

The generation of reactive oxygen species is the main mechanism by which nanoparticles exert antimicrobial activity, the degree of which can vary depending on their material, morphology, and size. This antimicrobial activity can be exploited in numerous sectors, such as the textile, animal, and antimicrobial packaging industries. In the latter, NPs are used to inhibit and control microbial growth, resist the penetration of liquids or gases, retain moisture, and maintain the shelf life of packaged food. The global market for antimicrobial materials and packaging has shown significant growth, which suggests a strong increase in the demand for antimicrobial nanoparticles whose synthesis processes can be industrially scaled.
Another potential application of metal-based nanoparticles with antimicrobial activity is coating the surfaces of noncritical equipment in medical care facilities, in order to combat nosocomial infections, which are a current and urgent worldwide problem. At this point, it must be emphasized that microorganisms are becoming increasingly resistant to disinfectants as well as to traditional antibiotics, and nanoparticles with antimicrobial properties can be an effective complement in the fight against these pathogenic microorganisms. However, it is essential to keep in mind that it is not only important to develop potential applications for antimicrobial NPs, but also to follow safety regulations that allow for the control of inhalation, migration, skin penetration, and ingestion of nanoparticles, which could potentially induce human health issues. It should also be remembered that metal-based nanoparticles are toxic to many organisms at high concentrations, and discarding these NPs into the environment can introduce serious environmental problems.
Lipid Content of Antibiotic-Resistant and -Sensitive Strains of Serratia marcescens

The lipid content of an antibiotic-resistant, nonpigmented strain (Bizio) and an antibiotic-sensitive, pigmented strain (08) of Serratia marcescens was studied. The resistant strain contains at least three times more total extractable lipid and phospholipid than the sensitive strain. Lysophosphatidylethanolamine, phosphatidylserine, lecithin, phosphatidylglycerol, phosphatidylethanolamine, and polyglycerolphosphatide were identified in the phospholipid fractions of both strains.

The cell wall is the first component of most bacteria to interact with the environment. The structure of the cell wall, in general, contains a rigid network of murein, which consists of polypeptide and polysaccharide. For gram-negative bacteria, in addition to murein, the cell walls contain up to 25% of their weight in lipoprotein and lipopolysaccharide and a smooth, soft, lipid-rich outer covering. The lipids in this outer layer vary among bacteria but, in general, gram-negative bacteria contain three times more lipid than gram-positive bacteria (21). It has also been found, in the case of certain antibiotic-resistant bacteria such as Escherichia coli (27), Staphylococcus aureus (15), Rhizobium meliloti (20), and Streptococcus pyogenes (13,21), that the increase in the proportion of the extractable lipids (i.e., the outer layer lipids) closely paralleled an increase in resistance. It has been suggested that the high content of phospholipids in the extractable lipid of the cell envelope may create a permeability barrier for the passage of certain antibiotics to the cytoplasmic membrane (15,20). Serratia marcescens is a member of the gram-negative bacteria. Certain strains of S. marcescens produce a characteristic red pigment. Some mutant strains produce a rose pigment, whereas others, such as the Bizio strain, are nonpigmented. In some cases, however, bacteria of the same strain may exhibit different colors, depending on their age (17). S. marcescens, formerly considered nonpathogenic, has recently been recognized with increasing frequency as the cause of certain clinical diseases (7,25). Most of the strains implicated as infectious were nonpigmented, including the Bizio strains (8,22). Characteristically, these nonpigmented strains have been resistant to most antimicrobial agents (7,22,23). The resistance of S. marcescens to antimicrobial drugs such as quaternary ammonium compounds has been studied (5,6). Simple staining by Sudan Black B and removal of resistance by the action of lipase supported the belief that the acquired resistance depends upon the increased lipid content of the resistant cell. Unfortunately, the method used was neither quantitative nor precise. In this communication, we report the results of lipid analyses of two strains of S. marcescens: the antibiotic-resistant, nonpigmented Bizio strain and the antibiotic-sensitive, pigmented 08 strain.

MATERIALS AND METHODS

Bacteria. Both the S. marcescens 08 (pigmented) and Bizio (nonpigmented) strains, grown on either an inorganic or an enriched medium (1,24), were supplied by General Biochemicals, Chagrin Falls, Ohio. The cells were harvested at the late log phase. Whole cells were isolated by washing the cell paste twice with distilled water, centrifugation at 10,000 rev/min for 10 min, and lyophilization. The dry whole cells were stored in a vacuum desiccator.
Cell walls were isolated from lyophilized whole cells by sonic treatment and centrifugation by the method of Williams (26), as modified by Tsang (24).

Extraction of free lipids from whole cells and isolated cell walls. Lyophilized whole cells and cell walls (2 to 5 g) were extracted by the method of Huston (16) and washed by the method of Folch (12). The dry weights of the lipids were measured and recorded. The known quantity of lipids was then redissolved in a measured quantity of chloroform-methanol (2:1, v/v) and stored at 5 C.

Separation of lipids. The lipids were separated on 20- by 20-cm thin-layer plates which had been spread with Silica Gel G (Brinkmann Instruments, Inc., Westbury, N.Y.) to a thickness of 0.25 mm. The plates were heat activated for 1 hr at 110 C and stored in a humidity-controlled cabinet until ready for use. All procedures were carried out at room temperature. The lipids were first separated into phospholipid, free fatty acid, and triglyceride fractions by using solvent system I: hexane-ether-acetic acid (90:10:1, v/v/v). The phospholipids were scraped off the thin-layer plates and eluted from the Silica Gel G with chloroform-methanol (2:1, v/v). In some cases, the neutral lipids were removed directly by acetone extraction, and the completeness of removal was checked by thin-layer chromatography in solvent system I. The phospholipids were dried, weighed, and redissolved in a known quantity of the chloroform-methanol (2:1, v/v) mixture. The phospholipid solution of known concentration was examined by one- and two-dimensional thin-layer chromatography, using solvent system II, chloroform-methanol-water (65:25:4, v/v/v), for the first dimension and solvent system III, chloroform-methanol-water (80:20:2, v/v/v), for the second dimension. The first dimension required 70 min and the second dimension took about 50 min. The phospholipid standards consisted of cardiolipin, lysolecithin, lecithin, phosphatidylethanolamine, phosphatidylserine, and sphingomyelin (Supelco, Inc., Bellefonte, Pa.). Molybdenum blue stain was used to detect the phospholipids. Ninhydrin spray (Brinkmann Instruments, Inc., Westbury, N.Y.) was used to detect phospholipids with free amino groups.

Micromethod for phosphorus determination. The method of Bartlett (3) was used with slight modification. A sample (1-3 ml) of phospholipid containing 25 to 75 µg was transferred to an acid-washed tube and evaporated to dryness on a steam bath. Then, 0.5 ml of 70% perchloric acid was added. The sample was digested at 200 to 250 C for 2 hr. After the tube was cooled to room temperature, 1.0 ml of water, 3.0 ml of 0.4% ammonium molybdate, and 0.2 ml of Fiske-SubbaRow reagent were added. The solution was heated in a boiling water bath for 15 min. After cooling and adjustment to a final volume of 5.0 ml with distilled water, the absorbance of the solution was read at 810 nm. The standard used in this method was anhydrous KH2PO4, over a range of 1 to 6 µg of phosphorus.

RESULTS AND DISCUSSION

Various quantities of extractable lipid were obtained, depending upon the strain and medium used. In the Bizio strain, the contents of extractable lipid from whole cells grown on inorganic and enriched media were 53.3 and 34.5%, respectively, compared with 19.5 and 8.1% from whole cells of the 08 strain grown on the corresponding media (Table 1).
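The resistant-to-sensitive ratios stated in the paragraph that follows can be checked directly from the Table 1 percentages quoted above; a minimal sketch, using only the paper's own figures:

```python
# Extractable lipid (% of dry cell weight), taken from the values quoted above.
lipid_pct = {
    ("Bizio", "inorganic"): 53.3, ("Bizio", "enriched"): 34.5,
    ("08",    "inorganic"): 19.5, ("08",    "enriched"): 8.1,
}

for medium in ("inorganic", "enriched"):
    ratio = lipid_pct[("Bizio", medium)] / lipid_pct[("08", medium)]
    print(f"{medium}: resistant/sensitive = {ratio:.1f}x")
# inorganic: resistant/sensitive = 2.7x
# enriched:  resistant/sensitive = 4.3x   -> "three to four times", as stated
```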
Regardless of the type of medium used, the quantity of extractable lipid from whole cells of Bizio (antibiotic-resistant) was approximately three times that of the 08 strain (antibiotic-sensitive), although the amounts did vary between inorganic and enriched media. The phospholipid content of the total extractable lipid from whole cells was determined both by phosphorus determination and gravimetrically, after the neutral lipids were removed by acetone extraction. The amount of phospholipid in the extractable lipid was estimated by multiplying the phosphorus content by 25 (Table 2). The percentages of phosphorus in the total extractable lipid in the Bizio and 08 strains were approximately the same (1.75%). Since the quantity of extractable lipid in whole cells of Bizio was three to four times that of the 08 strain, it can be assumed that the Bizio strain contained at least three times as much phospholipid in the total extractable lipid. This assumption was confirmed by a separate method, in which phospholipid was determined gravimetrically after the neutral lipids were removed by exhaustive acetone extraction. The ratio of the phospholipid content of Bizio to that of 08 was 3:1 (Table 2). In separate experiments, the phospholipid in the cell walls of 08 and Bizio was determined by the same method. In this case, the phospholipids of Bizio and 08 were found to be 8 to 9% and 2 to 3%, respectively. Again, the ratio is approximately 3:1 (Table 2). By means of one- and two-dimensional thin-layer chromatography on Silica Gel G, eight different major components were observed in the phospholipid fractions of both strains. The results are presented in Fig. 1 and 2. Lysophosphatidylethanolamine, phosphatidylserine, lecithin, phosphatidylglycerol, phosphatidylethanolamine, and polyglycerolphosphatides were identified (Table 3). In both strains, phosphatidylethanolamine was the major component. Apart from the pigment (prodigiosin), which was an additional component identified in the 08 strain, no major strain differences were observed. These results are consistent with those of Kates et al., who investigated the composition of lipids from the S. marcescens Temple University strain (17). In that study, they found that older (pigmented) cells and younger (nonpigmented) cells of the same strain contained approximately the same amount of extractable lipid (6-10%), with 38% of the lipid in the pigmented cells and 44% in the nonpigmented cells being phospholipid. It was concluded that, despite the presence of pigment (prodigiosin) in the physiologically older cells, there was no significant difference in the total extractable lipid or phospholipid content. In our experiments, the yield of extractable lipid from the antibiotic-sensitive, pigmented 08 strain ranged from 8.1 to 19.5%, whereas that of the Bizio strain ranged from 34.5 to 53.3%, depending on the culture medium (Table 1). The value (8.1%) of total extractable lipid in the 08 strain is reminiscent of the result (6-10%) reported by Kates et al. However, the amount of extractable lipid present in the antibiotic-resistant, nonpigmented Bizio strain was at least three times higher than that reported by Kates et al.
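A minimal sketch of the calculation implied by the methods above, converting an A810 reading to micrograms of phosphorus via a KH2PO4 standard curve and then to phospholipid with the paper's factor of 25. The standard-curve absorbances are invented for illustration and assume linear (Beer-Lambert) behaviour.

```python
# Hypothetical KH2PO4 standard curve: micrograms of P -> absorbance at 810 nm.
standards_ug_p = [1, 2, 3, 4, 5, 6]
absorbances   = [0.11, 0.22, 0.33, 0.44, 0.55, 0.66]  # illustrative values only

# Least-squares slope through the origin (blank-corrected readings assumed).
slope = sum(a * p for a, p in zip(absorbances, standards_ug_p)) / \
        sum(p * p for p in standards_ug_p)

def phospholipid_ug(sample_a810):
    """A810 -> ug P via the standard curve, then x25 to estimate ug phospholipid."""
    ug_p = sample_a810 / slope
    return ug_p * 25  # the paper's factor: phospholipid ~ 25 x phosphorus by weight

print(f"{phospholipid_ug(0.30):.0f} ug phospholipid")  # ~68 ug
```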
According to the studies of Norrington and James (21) and others (13,15,20,27), the total extractable lipid in antibiotic-resistant strains is usually two to five times higher than that of antibiotic-sensitive ones (Table 4). In our case, we found that the amounts of total extractable lipid and phospholipid in the antibiotic-resistant strain (Bizio) were at least three times those of the antibiotic-sensitive 08 strain. In recent years, the structure and function of cell wall components in relation to bacterial antibiotic resistance have been studied extensively (24). Studies by a number of investigators (5,6,19) have suggested that increased synthesis of lipid may be a factor in the resistance of bacteria to antibacterial substances (2,5,6,19,27). More recently, alteration of the fatty acid composition of gram-negative bacteria was noted as a consequence of antibiotic resistance (9). The increase of lipid synthesis in the cell wall and the alteration of fatty acid composition may create a permeability barrier to the uptake of antibiotics by the outer membrane (lipopolysaccharide-lipoprotein complex), by the cytoplasmic membrane, or both. In any event, the role of phospholipid in these systems is immensely important. The coenzymatic role of phospholipid in the biosynthesis of lipopolysaccharide and cell wall components is well documented (10,11,18). In addition to the permeability factor for the transport of antibiotics, the role of phospholipids as a cofactor for the enhancement of any degradative enzymatic activity toward antibiotics should also be considered.
Pre-service Science and Mathematics Teachers' Thoughts about Technology

This study aims to investigate pre-service teachers' opinions about technology. In this respect, the opinions of pre-service science and mathematics teachers were taken. The study was carried out at a university located in the capital of Turkey. The data were collected from 20 pre-service teachers in the department of secondary school science and mathematics education. Criterion sampling was used, and an interview form was developed to collect data. The results are presented in four themes: the role of technology in pre-service teachers' lives, pre-service teachers' preferences about technology, pre-service teachers' technology competence, and pre-service teachers' opinions about the use of technology in educational settings. It is expected that the results of the study will contribute to the revision of teacher education programs in terms of technology use and help cultivate learning environments in which technology is used more effectively.

Introduction

Considering information as the smallest building block of individual learning, it is impossible to ignore the effect of an information-rich environment on learning. An information-rich environment comprises media such as television and radio, traditional tools such as games, and the Internet and its servers, as well as traditional schools, libraries, and museums [26]. Among these media, Information and Communication Technologies (ICTs) are the best information sources in that they facilitate the communication necessary to be a part of the information society [16,25,34]. In particular, Internet-connected ICTs such as mobile phones, laptop computers, and other mobile devices are vital for the mobilization of communication [25] and for reaching the richest information environments, because the Internet is a learning source that aids users immensely in acquiring rich information by responding to individual questions, giving instant feedback to individual responses, and hosting user-generated content [26]. That is to say, recent technology, when used innovatively, has significant potential to support learning in the classroom [16]. Hence, using ICT in classrooms gains importance day by day for educational systems around the world [14,17,22], so integrating technology into instruction is a necessity and teachers should know how to integrate technology in their classrooms [16]. Indeed, it is expected that not only teachers but also students use technology such as computers as a tool to search, organize, analyze, and share or exchange information; schools should therefore educate students to be ICT literate, which means being able to apply technology effectively [28], so that students have the ICT knowledge and skills to respond successfully to the educational needs of the 21st century [22]. In other words, it is important to have knowledge about technology as well as about science and mathematics, because all these disciplines are essential for students to adapt to their roles in a constantly changing world [27,35]. Hence, the pre-service teachers who specialize in these disciplines have a critical role in educating the next generation as pioneers or leaders who make great contributions to the development of a country [35]. In this regard, this research focuses on the opinions of pre-service science and mathematics teachers about technology and its integration into their professional field.
Theoretical Framework

ICT is an effective tool for improving the quality of teaching and learning [41]. In other words, it is a significant tool for improving performance, cooperation, learning experiences, and learning outcomes [2]. It also drives far-reaching change in school practices and so helps to prepare students for the future [41]. Certainly, using traditional print sources is as important as using digital sources, yet the contribution of digital tools to the teaching-learning environment cannot be denied [16]. For instance, technology provides a new learning culture through learning via teacher presentation; self-regulated learning via resources such as books or multimedia-based content; and collaborative, interactive learning with other learners through ICT [31]. Likewise, Webb [40] emphasized that ICT facilitates learning, enables connections between knowledge and real-life experiences, provides students with the ability to manage their own progress, and simplifies the data collection and presentation process. Moreover, ICT integration into the teaching-learning process can facilitate learning depending on the subject matter [21,26]. That is to say, some concepts can be taught more easily with ICT, whereas for others, simple models can be enough. For instance, in a science course, a three-dimensional model of the human heart can be a better option than a clip of a beating heart for showing the difference between an artery and a vein. In contrast, for teaching healthy versus unhealthy heartbeat rhythms, a clip comes to the forefront in terms of facilitating learning. While visual and tactile models serve to show the components of the heart, the clip depicting a beating heart helps in learning the process [26]. As an example of ICT use in mathematics courses, the results of a study by Ural [38] conducted in Turkey can be given. Its results show that secondary school mathematics teachers use software called Morpa and Vitamin to teach topics in geometry that require visuality; PowerPoint presentations to teach solid objects and fractals; and animations and videos on the internet to teach solid materials, triangles, fractals, patterns and decorations, equations, and symmetry. Taking all of these applications into consideration, it is vital to use technology appropriately in the teaching-learning process [21]. Although ICT is a kind of facilitator for learning when used appropriately, there are some obstacles to ICT applications worldwide. A lack of computers [1,10,29,38] or other ICT equipment, difficulty connecting to the internet, lack of access to software programs [4,7], and lack of ICT knowledge or competence are the major obstacles [1,2,4,29]. In addition to all these obstacles, teachers think that learning, planning, and applying technology take time [7,38] and cause a heavy workload, which hinders ICT applications [10]. For these reasons, many teachers do not prefer to use ICT in their instruction. For instance, the results of Jimoyiannis & Komis [17] indicated that the majority of teachers could not give even one example of ICT applications used in their instruction when asked. The same study also shows that very few teachers mentioned using presentation software or the internet to support their traditional instruction, or basic ICT applications such as word processing and spreadsheets for lesson preparation purposes.
After all, to minimize the drawbacks in ICT application, it is necessary for teachers to adapt to their role in the ICT integration process. The roles of the teacher in this process are related to choosing ICT resources, utilizing them in the classroom, and enabling students to interact with the materials. Indeed, the teacher is a facilitator who helps to link the students and the computers in the ICT integration process, as well as an advisor, mentor, and planner [22]. At this juncture, teachers' beliefs about their ICT ability are crucial, because these beliefs shape their decisions about how they will apply their knowledge and skills in ICT. In particular, self-efficacy beliefs about ICT, which are related to attitudes towards ICT and competency levels in using computers, are central to this decision-making process [14], because teachers who consider themselves qualified and competent in ICT are more successful in the ICT integration process [20]. Moreover, many studies in the literature emphasize the relation between self-efficacy beliefs and ICT integration. For instance, one study shows an average-level positive correlation between teachers' self-efficacy beliefs regarding computers and their attitudes towards computer-integrated education [9]. Another study reveals that the prospective technology use of pre-service teachers significantly correlates with their self-efficacy beliefs about technology integration [3], similar to the study indicating that the prospective ICT integration of pre-service teachers is related to teacher self-efficacy, computer self-efficacy, and computer attitudes in education [32]. Also, teachers' experience with technological devices enhances their self-efficacy beliefs about ICT use [5,19] and affects the ICT integration process [20]. Like teachers' experience with technology, motivation is another important factor that affects ICT use [33]. That is to say, motivational factors such as teachers' beliefs concerning their capability to use Information Technology (IT), their satisfaction with the level of resources available for IT, and their view that IT use in teaching is interesting and enjoyable correlate most positively with ICT use, whereas the difficulties encountered in using IT are negative factors [6]. As the literature review shows, teachers' positive thoughts, beliefs, and perceptions about technology and their self-perceived competence in technology use can facilitate the ICT integration process. This study is also important for seeing what kinds of technologies pre-service teachers prefer to use and plan to use for educational purposes, which indicates what kinds of technologies they have knowledge about. Therefore, this study has the characteristics of a needs analysis, and it aims to reveal pre-service teachers' thoughts about technology in order to provide a general idea about prospective ICT integration.

Design

This is a qualitative study which aims to identify pre-service science and mathematics teachers' opinions about technology use. The data were collected through interviews, the method most often used by qualitative researchers [13].
Participants

The study was carried out at Hacettepe University, located in Ankara, Turkey. The data were collected from 20 pre-service teachers in the departments of physics, chemistry, biology, and mathematics teaching; five pre-service teachers from each department participated in this study. Criterion sampling, a type of purposive sampling, was used. The criterion for selecting participants was having taken a material development course about technology. This course aims to introduce the properties of some instructional technologies, to give information about the role and use of these technologies in science and mathematics teaching, and to develop 2- and 3-dimensional teaching materials using instructional technologies.

Data Collection Procedure

A semi-structured interview form was used to collect the data. Before preparing the interview form, the researchers conducted a thorough review of the literature. The interview form includes four main questions and some probes in order to search for answers to the research questions. These are: 1) What do you think about the role of technology in your life? (For what purpose do you use technologies?) 2) What kinds of technologies do you prefer to use most for educational purposes? (How do you use these technologies for educational purposes?) And when you become a teacher, what kinds of technologies will you prefer to use? 3) What do you think about your technology skills? (How do you evaluate yourself in terms of your technology skills?) 4) What do you think about the effects of ICT use on instruction? (In what respects do ICTs contribute to instruction? What kinds of limitations do ICTs have?) The researchers also prepared some warm-up questions to establish rapport between the interviewees and the interviewer. Before conducting the interviews, the dimensions of the inquiries in the interview form were discussed with two computer and technology professionals and two curriculum and instruction professionals, who checked the interview form; pilot testing of the form was then carried out with three pre-service teachers. Face-to-face semi-structured interviews were subsequently conducted with twenty pre-service teachers in order to gather their perceptions, specific experiences, opinions, and behaviors relating to the research questions. During the interviews, the researchers used open-ended questions that pre-service teachers could understand easily, along with probes and follow-up questions to increase the richness of the data. Interviews took approximately 15 minutes per person, and a voice recorder was used.

Data Analysis

First, the researchers read the literature related to technology and identified the themes of the research. After the semi-structured interviews were conducted, the interview content was transcribed for later coding. The researchers carefully read the transcripts, moving back and forth across the documents, and analyzed the texts deductively, either line by line or word by word, in search of codes related to technology. In this step of the analysis, the predefined themes acted as guidelines for determining the list of codes. After the analysis of the documents, the target codes were determined. The researchers then counted how many pre-service teachers presented an opinion about each code, and these coding decisions in each category are presented as frequencies. According to Fraenkel, Wallen & Hyun [13], counting is an important characteristic of various content analyses. It is also possible to utilize counting when working self-consciously with frequencies [24], so that the researcher is able to show the similarities and differences in the opinions of science and mathematics pre-service teachers by counting the codes obtained from their opinion statements.
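As a minimal sketch of the counting procedure described here, and of the inter-coder agreement figure reported in the reliability discussion that follows, the snippet below tallies code frequencies and computes simple percent agreement between two coders. The code labels and coding decisions are invented for illustration, and percent agreement is only one common way such a reliability figure is computed.

```python
from collections import Counter

# Hypothetical coding decisions: one code label per coded statement, per coder.
coder_a = ["time_management", "attention", "attention", "technical_problems", "attention"]
coder_b = ["time_management", "attention", "distraction", "technical_problems", "attention"]

# Frequency table of codes (the f values reported in the tables are of this kind).
print(Counter(coder_a))  # Counter({'attention': 3, 'time_management': 1, 'technical_problems': 1})

# Simple percent agreement between the two coders over the same statements.
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"inter-coder agreement: {agreement:.2f}")  # 0.80
```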
Validity and Reliability

In this research, internal validity was supported by peer debriefing (talking with pre-service teachers) and member checks (checking the data with stakeholders such as colleagues and the pre-service teachers with whom the interviews were conducted). The external validity of the data was addressed through transferability: purposive criterion sampling was used to encourage broader applicability, and pre-service science teachers were selected from different departments (chemistry, biology, and physics) to broaden the scope of the results. Findings are presented as clearly as possible to enable adequate comparison with other samples, and the code and theme lists are presented so that other researchers can check them against their own code lists. On the other hand, the replicability of a study, in other words its reliability, is also important [23]. For internal reliability, the researchers tried to express the research questions as clearly as possible and congruently with the descriptive study design. In the interviews, they tried to ask the questions of all participants using similar approaches, and a voice recorder was utilized. One of the researchers' colleagues also read 10 pages of raw data, and the inter-coder reliability was then calculated as .83; research suggests that reliability of 70% or higher is very good [24]. For external reliability, the researchers tried to present all data as clearly as possible.

Results

The results of this study are presented in four themes clarifying the research questions: the role of technology in pre-service teachers' lives, pre-service teachers' preferences about technologies, pre-service teachers' technology competence, and pre-service teachers' opinions about technology use in educational settings. The results regarding these themes are presented in turn below.

The Role of Technology in Pre-service Teachers' Lives

When the data were examined, it was seen that pre-service teachers' opinions fell into three levels of importance. These data are presented in Table 1. According to Table 1, fourteen of the twenty pre-service teachers pointed out that technology has an important role in their daily lives. Five teachers expressed that technology was partially important; they also stated that technology enables them to follow daily events and do research. In contrast, only one biology teacher expressed that she did not turn on the computer if it was not necessary.

Pre-service Teachers' Preferences about Technologies Used for Educational Purposes

Pre-service teachers' preferences about technologies are grouped under two headings: "technological devices used most commonly by pre-service teachers for educational purposes" and "pre-service teachers' preferences about technologies intended for their classroom use when they become teachers". The results for the technological devices used most commonly by pre-service teachers for educational purposes are presented in Table 2.
Table 2 indicates that all pre-service teachers stated that they used the computer for educational purposes. Specifically, they stated that they used search engines such as Google and web pages related to their professions, such as the web site of the Scientific and Technological Research Council of Turkey (TUBITAK), and they also said they used Dailymotion and YouTube for video sharing. Pre-service chemistry and biology teachers stated that they used PowerPoint for educational purposes, and mathematics teachers used Geogebra, Cabri, and Maple as computer programs. In addition, one of the pre-service physics teachers gave multimeters as an example of the tools used in the laboratory and of electronic measurement tools. Pre-service teachers also mentioned the technologies they intended to use in their lessons when they become teachers; the results are presented in Table 3. According to Table 3, the majority of the sample expressed that they will use educational software (f=8). Pre-service teachers stated they will use educational software for explaining things in three dimensions, such as molecules and prisms, and for simulations related to experiments. Six pre-service teachers expressed that they would use video in their lessons when they become teachers; one of them emphasized that the life of Einstein could be shown via video. Four of the teachers expressed that they would use the smart board and the projector in their lessons when they become teachers. Moreover, two of them expressed opinions on the use of overhead projectors, and only one physics teacher expressed an opinion on the use of the tablet.

Pre-service Teachers' Technology Competence

Pre-service teachers evaluated their competency level in using technology. Their opinions were classified as very good, good, average, bad, and very bad. The related results are presented in Table 4. Table 4 shows that most of the pre-service teachers stated that they used technology at an average level (f=9). Two of them evaluated themselves as very competent in terms of technology use. None of them thought that their competency level in using technology was very bad.

Pre-service Teachers' Opinions about Technology Use in Educational Settings

This theme consists of two sub-themes: positive opinions and negative opinions on the use of technology in educational settings. Results for these sub-themes are presented as follows.

Positive Opinions on the Use of Technology in Educational Settings

The positive opinions of pre-service teachers about technology are categorized in terms of teachers and students. The opinions about technology relating to teachers are presented in Table 5. According to Table 5, the code "technology enables managing time effectively" was expressed by more pre-service teachers than any other code (f=10). A mathematics teacher in the sample presented ideas about this code as follows: "If 15 problems were normally to be answered in 40 minutes, this doubled when the smart board was used in geometry class. This provided students with more different types of questions". In addition to the codes in Table 5, it was determined that there were some extra codes belonging to the subject areas. These codes are presented in Table 6.
Table 6 shows codes reflecting the positive opinions of teachers, like Table 5, but it also includes codes specific to subject areas such as physics and mathematics. For instance, the second and third codes are related to experiments, and while pre-service science teachers expressed opinions about these codes, mathematics teachers did not, as might be expected. This is why Table 6 is presented separately from Table 5, although both reflect the opinions of teachers. Most pre-service science teachers preferred the code "easy demonstration of experiments which cannot be performed in the classroom" (f=6). A pre-service biology teacher expressed ideas about this as follows: "It is economical. We can't carry out such an experiment in the class but there is an example experiment on that thing. The student presses the button and, say, clicks on the acids. If he adds the wrong amount of it, the experiment fails. I would certainly like to carry it out in the class but I can't. However, I can do it that way. Because those chemicals are expensive, they are hard to acquire. Time-wise, it is not always possible to use the labs in schools". The same biology teacher explained the possibility of performing experiments safely in a virtual environment as follows: "Those chemicals are also hazardous. Yet, they conduct the experiments by themselves in the program. It has many model experiments. For example, he says he is adding amylase and he adds it. Then, he chooses the wrong chemical, the reaction doesn't occur". In addition, a mathematics teacher stated that technology enables practical equation solving, as follows: "There is a mathematics program called Alpha. Difficult mathematical operations such as integrals and derivatives are entered linearly. And it gives you the solution step-by-step". Pre-service teachers also expressed positive opinions about using technology in terms of students; those ideas are presented in Table 7. According to Table 7, the majority of pre-service teachers thought that technology enabled "capturing students' attention" (f=12). In addition, many pre-service teachers mentioned that technology enabled the concretization of abstract knowledge and the retention of knowledge (f=10). A chemistry teacher in the sample presented ideas on technology's attention-capturing qualities as follows: "Technology captures the attention of people. Once teachers use technology and show something visually, it captures our attention more compared to a teacher who comes to the classroom and says something verbally, in that we like the computer. In other words, when teachers show something via computer, I do not think anyone makes too much noise in the classroom, because different things capture everybody's attention".

Negative Opinions on the Use of Technology in Educational Settings

The negative opinions of pre-service teachers about technology are categorized in terms of teachers, students, and the learning environment. The opinions about technology relating to teachers are presented in Table 8.
According to Table 8, the code "time consuming" was expressed by more pre-service teachers than any other code (f=9). From these results it can be concluded that while technology use may sometimes be time consuming, at other times it enables better time management (see Table 5). In other words, provided that teachers know how and where to use technology effectively, it can help with time management. A physics teacher summarized this as follows: "Teachers need to learn how to use technology in order to teach with technology. It will take 15 minutes to turn on the computer and set up the projector if the teacher doesn't know how to do it, or he wastes time saying 'What do I have to do now?' if he doesn't know how to use a program. It does more harm than good. It wastes time…". In addition to pre-service teachers' negative opinions relating to teachers, there were also some codes reflecting negative opinions relating to students; these codes are presented in Table 9. According to Table 9, the code "distractibility" was expressed by more pre-service teachers than any other code (f=7), whereas "capturing attention" (see Table 7) was regarded as a positive aspect of technology. The pre-service teachers explained that if teachers use technology passively, students lose their attention; they also thought that distractibility can occur because technology provides many stimuli all at once. Pre-service teachers additionally stated some negative opinions on the use of technology related to the learning environment; these codes are presented in Table 10. According to Table 10, the code "technical problems" was expressed by more teachers than any other code (f=6), so it can be said that pre-service teachers are worried about possible technical problems. A biology teacher expressed ideas about this as follows: "There are teachers who have prepared their slides but couldn't use them because of a malfunctioning computer. For example, a power shortage is a problem". A pre-service mathematics teacher also mentioned technical problems as follows: "It is a tool which can be broken. It's very delicate. Sometimes, there can be technical problems and they can't be fixed right away". Along with these data, the number of codes reflecting pre-service teachers' positive opinions on technology in relation to teachers and students was counted, as was the number of codes reflecting their negative opinions in relation to teachers, students, and the learning environment. Comparing the numbers of positive and negative codes, it is concluded that pre-service teachers expressed more positive opinions (n=30) about technology use in lessons than negative opinions (n=21).

Discussion and Conclusions

ICT use in teaching-learning processes becomes more significant with every passing day. Using ICTs properly in the classroom increases the quality of teaching and facilitates learning. At this point, teachers' self-efficacy beliefs about technology [3,9,14,32], their technology competency [20,14], their experience with technology [5,19,20], and their positive opinions about technology [6,33] have a crucial role in their ICT applications. The literature also supports that the prospective ICT integration or prospective technology use of pre-service teachers correlates with these factors [3,32]. In this respect, this research intended to reveal pre-service teachers' thoughts about technology use.
First, when the results related to the role of technology in pre-service teachers' lives were examined, it was seen that technology has an important role in most of the pre-service teachers' lives. Moreover, when asked about their preferences regarding technological devices, pre-service teachers reported that the computer was the most commonly used technological device for educational purposes; they also stated that they used the computer for internet searches and PowerPoint presentations. This finding is in accordance with previous research [11,17]. The pre-service teachers who participated in our research also emphasized that they used software such as Office, Geogebra, Maple, Cabri, Vitamin, and Morpakampus during their university studies. Furthermore, 40% of the sample (f=8) noted that they would prefer software most among the technological tools for their classroom use when they become teachers. Although they preferred educational software most, the total number of pre-service teachers who intended to use software is low (f=8, 40%). This is consistent with the research of Petras [30] on in-service teachers, according to which teachers prefer traditional hands-on materials rather than technological tools. In addition, most of the pre-service teachers stated that their technology competence was at an average level. This result may stem from their average level of experience with technology, just as the study results of Fleming, Motamedi & May [12] show that the more pre-service teachers use computer technology, the more competent they become in computer technology skills. Pre-service teachers' average technology competency can also be a predictor of their prospective technology use, because studies of in-service teachers in the literature show that most teachers use the internet and computers for 1-2 hours daily [36] and that teachers have average-level self-efficacy beliefs and attitudes [9,15]. Next, pre-service teachers stated both positive and negative opinions about technology use in educational settings, but they held more positive opinions regarding technology use than negative ones. The results of this study confirm previous findings about teachers' positive opinions related to ICT [17,30]. When their positive opinions were examined in detail, it was seen that technology enables teachers to manage time effectively and captures students' attention. Consistent with this result, John & Sutherland [18] stated that technology increases student concentration because it arouses students' interest in the lessons. Also, according to pre-service mathematics teachers, technology enables them to solve equations practically, and for pre-service science teachers it provides easy demonstration of experiments which cannot be done in the classroom and allows experiments to be carried out safely in a virtual environment. When the negative opinions about technology relating to teachers were examined, it was seen that technology causes time consumption, owing to teachers' lack of knowledge in technology use, digressions, and the like,
and, according to the pre-service teachers, technology can cause distractions among students because of the presence of too many stimuli, teachers' passive use of technology, etc. Similarly, teachers' lack of technology competence is emphasized in the literature [1,8]. Another negative opinion about technology was that technical problems, such as malfunctions of tools, power shortages, etc., disrupt the teaching-learning process.

All in all, it can be deduced that pre-service teachers know the importance of technology and think more positively about it. Yet, pre-service teachers should improve their competency regarding technology. Unless they improve their technology knowledge, they will fall behind in this age of technology, and so will their students, Generation Z. Therefore, teacher education programs should be revised to educate pre-service teachers to be as competent as possible in terms of technology use, and teacher education classrooms should also be designed for pre-service teachers to gain more experience in the use of different kinds of technological devices. Moreover, it is important to mention the limitations of this study. This research was carried out with a limited number of pre-service teachers, as only 20 volunteers joined the study, and all of them were from the same university. Therefore, further studies can be carried out with more pre-service teachers, at different universities or in other countries, to generalize the results to a broader scope.

The interview questions are: 1) What do you think about the role of technology in your life? (For what purposes do you use technologies?) 2) What kinds of technologies do you prefer to use most for educational purposes? (How do you use these technologies for educational purposes?) And when you become a teacher, what kinds of technologies will you prefer to use? 3) What do you think about your technology skills? (How do you evaluate yourselves in terms of your technology skills?) 4) What do you think about the effects of ICT use on instruction? (In what respects do ICTs contribute to instruction? What kinds of limitations do ICTs have?)

Table 1. The role of technology in the pre-service science and mathematics teachers' lives
Table 2. Technological devices used most commonly by pre-service teachers for educational purposes
Table 3. Pre-service teachers' preferences about technologies intended for their classroom use when they become teachers
Table 4. Pre-service teachers' self-competence about technology
Table 5. Positive opinions on the use of technology for teachers
Table 6. Positive opinions belonging to the subject area on the use of technology for teachers
Table 7. Positive opinions on the use of technology related to students
Table 8. Negative opinions on the use of technology for teachers
Table 9. Negative opinions on the use of technology for students
Table 10. Negative opinions on the use of technology related to the learning environment
2018-11-30T22:13:48.183Z
2016-03-01T00:00:00.000
{ "year": 2016, "sha1": "316071ec6339f25a55c7c525502d20820730c574", "oa_license": "CCBY", "oa_url": "http://www.hrpub.org/download/20160229/UJER5-19505231.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "316071ec6339f25a55c7c525502d20820730c574", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
271274480
pes2o/s2orc
v3-fos-license
Applications of Microbial Organophosphate-Degrading Enzymes to Detoxification of Organophosphorous Compounds for Medical Countermeasures against Poisoning and Environmental Remediation

Mining for organophosphorous (OP)-degrading bacterial enzymes in collections of known bacterial strains and in natural biotopes is an important research field that leads to the isolation of novel OP-degrading enzymes. Implementation of strategies and methods of protein engineering and nanobiotechnology then allows large-scale production of enzymes displaying improved catalytic properties for medical uses and protection of the environment. For medical applications, the enzyme formulations must be stable in the bloodstream and upon storage, and must not be susceptible to inducing iatrogenic effects. This includes, in particular, the nanoencapsulation of bioscavengers of bacterial origin. In the field of bioremediation, these enzymes play a crucial role in environmental cleanup by initiating the degradation of OPs, such as pesticides, in contaminated environments. In microbial cell configuration, these enzymes can break down the chemical bonds of OPs and usually convert them into less toxic metabolites through a biotransformation process, or contribute to their complete mineralization. In their purified state, they exhibit higher pollutant degradation efficiencies and the ability to operate under different environmental conditions. Thus, this review provides a clear overview of the current knowledge about applications of OP-reacting enzymes. It presents research works focusing on the use of these enzymes in various bioremediation strategies to mitigate environmental pollution, and in medicine as alternative therapeutic means against OP poisoning.

Introduction

Organophosphorous compounds (OPs) are thio/oxo phosphoesters. They are highly toxic compounds widely used all over the world for multiple applications. They have been used for more than 70 years as pesticides [1], as drugs or pro-drugs in human and veterinary medicine [2], and as antiwear agents and flame retardants in industrial oils, such as tricresyl phosphate [3]. In particular, this latter compound, involved in aerotoxic syndrome, may also cause accidental or criminal poisoning, e.g., in the USA during Prohibition due to its presence in adulterated alcohols, and in Morocco through contaminated oil in canned fish. However, the most toxic OPs are banned chemical warfare agents (CWA) [1] like G agents (tabun, sarin, cyclohexyl-sarin and soman), V agents, and A agents, the so-called novichoks ("newcomers"). The latter compounds, about 10 times more toxic than VX and 10⁴ times more toxic than OP pesticides [4], are highly stable in the environment, and treatment of poisoning is very difficult [5].
The use of pesticides in agriculture worldwide has increased significantly during the past three decades, passing from about 1.8 million tons in 1990 to around 3.5 million tons in 2021, of which more than 7.5 × 10⁵ tons are insecticides, which include OPs [6]. Because of their toxicity, OPs, including insecticides and nerve agents, pose significant threats to human health and the environment. These compounds can persist in the environment for long periods (up to 360 days) and can contaminate food products, soil, and water sources [7]. One promising approach to mitigating the risks associated with OPs is the use of bioremediation techniques for their degradation. Bioremediation is a cost-effective and environmentally friendly technique that utilizes micro-organisms (such as bacteria, fungi, and algae) or the enzymes they produce to degrade, transform, or remove contaminants from soil, water, or air. This process relies on the natural metabolic capabilities of these organisms to break down complex pollutants into less harmful substances [8,9]. Many studies have reported the efficiency of using micro-organisms, such as algae, fungi, and bacteria, to degrade the complex chemical structures of OPs into simpler and less toxic molecules through different enzymatic processes [10-14]. However, microbial degradation may be challenging to apply effectively under real environmental conditions due to several limitations: (1) the degradation process can be slow, taking considerable time (from several weeks to several months) for the micro-organisms to degrade the pesticides completely; (2) microbial populations can show genetic instability, leading to differences in their degradation abilities; (3) they can be less effective at degrading certain types of pesticides with complex chemical structures; and (4) microbial activity can be affected by several environmental factors, such as pesticide bioavailability, humidity, pH, salinity, and temperature, whose fluctuations may reduce degradation effectiveness [15,16]. To overcome these constraints, the use of purified degrading enzymes offers a number of advantages over whole-cell systems. These include the potential to specifically target organic pollutants with a higher speed and efficiency of degradation, the innocuity of the process, which, unlike microbial processes, does not produce any risky by-products, and the ability to perform in a variety of environmental situations [17]. Furthermore, this approach can be used in large-scale bioremediation strategies, such as in situ and ex situ biodegradation of OPs, offering sustainable and cost-effective solutions to environmental pollution.
Owing to the high risk of accidental and self-poisoning due to the use of OPs in agriculture, the threat of OPs being used in terrorist acts and in asymmetric conflicts, and the environmental consequences of the extensive worldwide use of OPs, it is important to review new approaches for decontamination, remediation, and therapeutic means against these compounds. Several reviews about the use of OP-degrading enzymes in the fields of medicine and bioremediation were published in the past few years [14,18-21]. These fields are now so important that, while our manuscript was under review, an article covering the same topics was published [22]. Thus, this review updates our knowledge, exposes basic concepts and problems, and explores the effectiveness of using OP-degrading enzymes from micro-organisms in prophylaxis and treatment of OP poisoning, and in treating OP contamination for sustainable environmental management. Furthermore, the promising prospect of employing enzyme-containing nanoparticles for medical purposes and to mitigate OP contamination of actual and synthetic aqueous effluents is also explored, and recent achievements are reported.

Acetylcholinesterase (AChE; EC 3.1.1.7) plays a major role in the cholinergic system by terminating the action of the neurotransmitter acetylcholine. The related enzyme butyrylcholinesterase (BChE; EC 3.1.1.8) has a minor role in the cholinergic system and its physiological functions are not well known, but it is of importance in pharmacology and toxicology in the degradation of drugs and the scavenging of OP and carbamate toxicants [2].

Figure 1 describes the minimum reaction scheme of ChE inhibition by OPs, post-inhibition reactions and reactivation. OP first binds reversibly to ChEs (step 1). Then, after formation of this complex, the active-site serine (E-OH) is phosphylated. The phosphylation reaction is accompanied by release of the OP leaving group X⁻ (step 2). X⁻ can be a halide (F⁻) or an oxo/thio alkyl/aryl ion. It is important to note that, unlike reactions of ChEs with ester substrates, water is too weak a nucleophile for fast spontaneous reactivation of phosphylated ChEs. Thus, OPs can be regarded as pseudo-substrates of ChEs [2]. Therefore, phosphylated enzymes can only be reactivated in a short time by strong nucleophilic agents, like the oximate ions used as antidotes in emergency treatment of acute OP poisoning [24,25] (step 3). Post-inhibitory reactions may then complicate the scheme: the phosphyl-ChE conjugate may undergo spontaneous dealkylation (step 4) through alkyl-oxygen bond scission (this reaction is called "aging") [26]. Aging causes irreversible inactivation (non-reactivatability) of the phosphylated enzyme. Aging can be very fast (t½ = 3 min at 37 °C for human AChE phosphonylated by soman, thus impairing practical reactivation in this case). The reactivation of aged ChEs has long been considered a catch-22. However, drug-mediated reactivation of ChEs through realkylation (+R1) of aged ChEs (reverse reaction in step 4 and subsequent step 3, called "resurrection" or "resuscitation") was recently demonstrated to be possible [27]. In contrast, the direct displacement of the aged adduct (step 5), leading to spontaneous enzyme reactivation, is still impossible, but research continues to address this difficult issue.

Irreversible inhibition of ChEs by OPs leads to the accumulation of acetylcholine in synapses and a blockade of cholinergic transmissions. Indeed, the inhibition of AChEs in the peripheral (ganglia and neuromuscular junctions) and central nervous systems is the main cause of the acute toxicity of OPs [23]. This inhibition causes a major cholinergic syndrome. In addition, the irreversible inhibition of other hydrolases in the central nervous system and the alkylation of different proteins play a role in the sub-acute toxicity of OPs, as well as in non-cholinergic toxicity, in particular long-term post-exposure effects [28]. Thus, phosphorylation of serine, tyrosine, lysine, and other residues in numerous proteins is also involved in the sub-lethal and chronic toxicity of OPs [29].
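The kinetic competition sketched in Figure 1 between oxime-mediated reactivation (step 3) and aging (step 4) can be made concrete with a small numerical simulation. The Python sketch below integrates the scheme for the states E (free enzyme), EP (phosphylated enzyme), EPa (aged enzyme), and OP; the aging rate is derived from the soman/human-AChE half-life of ~3 min quoted above, while the inhibition and reactivation constants and the starting concentrations are illustrative assumptions, not measured values.

    # Minimal kinetic sketch of the Figure 1 scheme: irreversible inhibition of a
    # cholinesterase (E) by an OP, oxime-mediated reactivation of the phosphylated
    # enzyme (EP), and aging (EPa). Rate constants are illustrative assumptions,
    # except k_age, taken from the soman/human-AChE half-life of ~3 min in the text.
    import numpy as np
    from scipy.integrate import solve_ivp

    k_i   = 1e7 / 60.0               # bimolecular inhibition constant, M^-1 s^-1 (assumed)
    k_r   = 0.1 / 60.0               # pseudo-first-order oxime reactivation, s^-1 (assumed)
    k_age = np.log(2) / (3 * 60.0)   # aging rate, s^-1 (t1/2 = 3 min, from text)

    def rhs(t, y):
        E, EP, EPa, OP = y
        inhib = k_i * E * OP
        react = k_r * EP
        aging = k_age * EP
        return [-inhib + react,          # free enzyme
                inhib - react - aging,   # phosphylated (reactivatable) enzyme
                aging,                   # aged, non-reactivatable enzyme
                -inhib]                  # free OP

    y0 = [1e-8, 0.0, 0.0, 5e-8]          # 10 nM enzyme, 50 nM OP (assumed)
    sol = solve_ivp(rhs, (0, 3600), y0, max_step=1.0)
    print(f"active enzyme after 1 h: {100 * sol.y[0, -1] / y0[0]:.1f}%")
    print(f"aged enzyme after 1 h:   {100 * sol.y[2, -1] / y0[0]:.1f}%")

With these assumed constants, essentially all of the enzyme ends up in the aged, non-reactivatable state within the hour, illustrating why fast-aging OPs such as soman defeat oxime therapy.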
The medical countermeasures against OP poisoning are imperfect. Prophylactic means can partially mitigate the acute toxicity of OPs, oxime antidotal treatment of phosphylated ChEs does not work with certain OPs (either for steric reasons or due to the fast aging reaction of phosphylated AChE), and symptomatic countermeasures are limited. Classical pharmacological approaches have thus reached their limits. Moreover, due to the accumulation of OP molecules in depot sites (e.g., fat) and their subsequent slow release into the bloodstream, ChEs may remain inhibited for long periods of time. Therefore, the persistence of certain OPs in the body after exposure complicates treatment of acute poisoning. This has been known for a long time and is well documented for severe intoxications by parathion [30]. Yet, in the past 20 years, significant progress has been made in emergency pharmacological treatments of OP poisoning and in the medical management of chemical casualties [31-36].

The use of OP-reacting enzymes to trap, neutralize, and degrade OPs was initially proposed as an alternative to classical pharmacological means. It was based on the observation that several endogenous enzymes and OP-reacting proteins present in the skin, blood, and liver react with OPs and are involved in natural defenses against OP toxicity. Indeed, the presence of detoxifying or scavenging enzymes, such as BChE, in the skin contributes to reducing the concentration of OP that penetrates into the body [37]. Liver enzymes also play an essential role in OP detoxification. In particular, glutathione S-transferases (GST; EC 2.5.1.18) are involved in the degradation of the alkyl/aryl chains of OPs [38], while carboxylesterases (CaE; EC 3.1.1.1) and other serine hydrolases irreversibly scavenge OPs after phosphylation of their active-site serine [39]. Lastly, blood bioscavengers significantly contribute to reducing the amount of OP molecules reaching neuronal and neuromuscular targets. Plasma enzymes and OP-reacting proteins play a major role in this natural defense. In particular, paraoxonase-1 (PON-1; EC 3.1.8.1), an endogenous phosphotriesterase (PTE), may hydrolyze certain OPs at a high rate. It is well known that animals in which the plasma concentration of PON-1 and/or CaE is high, like rabbits, are relatively resistant to OPs [40]. Conversely, PON-1 knockout mice are very sensitive to OPs [41]. Unlike the plasma of most model animals, human plasma does not contain carboxylesterases [42,43]. However, human plasma contains BChE, which effectively scavenges a fraction of OP molecules in the bloodstream. A special role is devoted to albumin. Albumin is the most abundant protein in plasma and lymph, with a concentration close to 0.6 mM. It slowly reacts with esteryl-, carbamyl-, and phosphoryl-esters with a turnover. Albumin was shown to play a significant role in the detoxification of carbaryl at toxicologically relevant concentrations [44]. Thus, albumin in lymph and plasma may also scavenge certain OPs and play a role in their detoxification [45,46]. In addition, secondary OP targets present in various tissues participate in the neutralization of OP molecules. They also play a role in the natural protection of the cholinergic system [47]. However, as mentioned, the inactivation of certain secondary targets is responsible for the sub-lethal and chronic toxicity of OPs. Thus, despite this last issue, most endogenous OP-scavenging and OP-hydrolyzing enzymes and secondary targets can be regarded as the first line of defense against acute OP poisoning [48].
Sources of Organophosphate-Degrading Enzymes (Fungal, Bacterial and Archaeal Sources, Engineered Enzymes)

Numerous studies have demonstrated the ability of micro-organisms to use OPs as a source of carbon (C), phosphorus (P), nitrogen (N), or sulfur (S). Other studies have proven that the degradation of OPs is possible through co-metabolism (the obligatory presence of a complementary substrate to provide the source of C and energy). In all cases, degradation is the result of the activity of enzymes secreted by the micro-organisms involved. Enzymatic catalysts capable of degrading OPs have been identified not only in microbial species, but also in eucaryotes like squid and mammals [49]. Some examples are shown in Table 1.

Organophosphate Degradation by Microbial Enzymes, Types of Enzymes and Mechanisms

The methods employed to isolate OP-degrading enzymes depend on their location in the microbial cells. They include cell-disrupting methods, such as the use of silica or glass beads, ultrasonication, etc., to isolate intracellularly located enzymes, and cell centrifugation, filtration, etc., for those with extracellular locations.

Multiple enzymes are involved in the microbial hydrolytic degradation of OPs (Figure 2). In addition, bacterial cytochrome P450s (BacCYPs) dearylate aryl-containing OP groups (R1, R2) and also play a role in the detoxification of these compounds.

Phosphotriesterases (PTEs) are a group of enzymes that hydrolyze OPs. They are found in animals, micro-organisms, and plants. There are three different types of well-characterized PTEs: organophosphate hydrolase (OPH and OpdA), methyl parathion hydrolase (MPH), and organophosphorus acid anhydrolase (OPAA). The OP-degrading enzymes catalyze hydrolysis of either O-P, C-P, P-S, P-N or P-F bonds. OPs are broken down by enzymes through a nucleophilic attack on their phosphorus core. This attack is facilitated by two divalent metal ions, a water molecule, and reactive amino acids present in the enzyme's active site [49,58].

OPH effectively hydrolyzes organophosphate pesticides containing P-O, P-F, P-CN, and P-S bonds. Paraoxon, parathion, and diazinon are examples of OP insecticides containing P-O bonds that are efficiently hydrolyzed by OPH. It has been established that the optimal OP substrate for OPH is paraoxon [58]. Mutagenesis can be used to further enhance the capacity of the various OPH enzymes to hydrolyze and detoxify OPs [60].
PTEs are promiscuous enzymes. Their primary activity is a lactonase activity; thus, these enzymes are now called phosphotriesterase-like lactonases (PLLs). The lactonase activity plays a role in bacterial communication (quorum sensing) [65,66]. Virulence and the formation of biofilms are regulated by the concentration of lactones, the quorum-sensing mediators, in the medium. By hydrolyzing lactones, the lactonase activity thus acts as a quorum quencher, which in turn inhibits bacterial communication [67] and thereby the formation of biofilms. The PTE activity is believed to have evolved from ancestral lactonases [68-71]. The recently observed reshaping of the active-center conformation and the plasticity of an archaeal PLL support this theory [72].

Both OpdA and OPH belong to a broad family of enzymes that have a binuclear metal core and require two metal ions, such as Zn²⁺ (OpdA) or Co²⁺ (OPH), in the α and β sites for the hydrolytic reaction step. The two coordinated metal ions in OpdA engage with a hydroxide ion or water molecule as well as a carboxylated lysine residue (Lys169). Although 90% of the OPH sequence identity is shared by the OpdA enzyme, there are some differences in substrate selectivity and kinetic behavior between them, according to homology studies. The most significant amino acid sequence differences between OpdA and OPH are as follows: (a) distinct residues in the active site, identified in OPH and OpdA as His254/Arg254, His257/Tyr257, and Leu272/Phe272, respectively; (b) 20 additional amino acids at the OpdA C-terminus that appear to be unimportant for catalysis because they are situated far away from the active site; (c) a complicated hydrogen-bond network in OpdA that allows two (Tyr257 and Arg254) of the three amino acid residues close to the active site to play a significant role in modifying the catalyzed reaction; in OPH, these hydrogen bonds are not as important [76].

The most well-known enzyme is Brevundimonas diminuta PTE. It is a 72 kDa dimeric enzyme, and Zn²⁺ is involved in the catalytic process [77]. Substitution of the native Zn cations in the active site with Mn, Co, Ni, or Cd cations results in almost full retention of catalytic activity. Following the first determination of the 3D structure of Brevundimonas diminuta PTE [78] (Figure 3A), a series of crystal structures and kinetic and spectroscopic studies were reported. The oxygen atom seen in X-ray structures, coupled to the two metal cations (Figure 3B), is thought to be in a hydroxyl form because the structure is pH-dependent and the protonation of the hydroxyl leads to the loss of coupling [79]. The catalytic mechanism of PTEs is still debated, and the functional roles of the divalent cations and amino acids in the active center of these enzymes are not yet completely understood [64,80-85].
The catalytic mechanism proposed by Bigley and Raushel [64,87] for the Brevundimonas diminuta enzyme is the most accepted (Figure 3C). It states that PTE-catalyzed hydrolysis of OPs results from a direct attack of the hydroxyl group bridging the divalent metal cations on the P atom. As a result, the formation of products is accompanied by inversion of the stereo-configuration at the phosphorus atom. The hydrolysis product is bound to the cations in a bidentate manner. Surrounding active-center residues play a role in accepting a proton from the hydroxyl group upon formation of the negatively charged reaction product. Kinetic [79], crystallographic [88], electron paramagnetic resonance spectroscopy [83], NMR [89], and computational chemistry studies [84,90] support this mechanism. Lessons from eucaryotic PTEs, PON-1, and DFPase contributed to solving the puzzling mechanism of bacterial and archaeal PTEs. A catalytic mechanism for squid DFPase (DFP, diisopropyl fluorophosphate), a calcium-dependent PTE, was first proposed in which a calcium-coordinated aspartate acts as the nucleophile attacking the phosphorus atom [64,91]. However, a more realistic mechanism was later proposed, in which a water molecule is activated, leading to a hydroxide ion prone to attack the phosphorus center [92]. This scheme is consistent with the general mechanism proposed for all PTEs: mammalian PTEs (PON-1) [93,94], bacterial PTEs [80], and PLLs [69,87] (Figure 3C).

Mechanisms of OP degradation by Opdh were also proposed. Mali et al. [53] studied the degradation of chlorpyrifos by the Opdh of Arthrobacter sp. HM01 and proposed two possible mechanisms. The first mechanism generates TCP (3,5,6-trichloro-2-pyridinol), which is successively transformed into DHP (2,6-dihydroxypyridine), malic acid, and pyruvic acid; the latter product is able to enter the TCA (tricarboxylic acid) cycle. The second proposed mechanism generates DETP (diethylthiophosphoric acid), which is subsequently transformed into phosphoric acid and will also enter the TCA cycle.

OpdA, a variant of the OPH enzyme, is the only enzyme that is commercially used to bioremediate and clean up pesticide-contaminated water sources. OpdA is encoded by the opdA gene, obtained from Agrobacterium radiobacter. It can hydrolyze a wide range of OP pesticides [58]. Although the secondary structures of OpdA and OPH are similar, their active-site structures are different, resulting in different substrate specificities. Typically, OpdA favors substrates with fewer alkyl substituents. It may cleave substrates into phosphate ions and alcohols [49,51].
The catalytic efficiency (kcat/Km) of Brevundimonas diminuta PTE for hydrolysis of paraoxon, the model substrate, approaches the diffusion-controlled limit (2 × 10⁹ M⁻¹ min⁻¹ [95]). However, it is rather slow against malaoxon. Rational engineering of the enzyme then greatly improved its catalytic efficiency against malaoxon, up to kcat/Km = 4.6 × 10⁵ M⁻¹ min⁻¹ [96]. The catalytic activity of the wild-type enzyme is also slow against CWA (e.g., 6 × 10⁵ M⁻¹ min⁻¹ against soman [97]). However, directed evolution of the enzyme showed that changing only three amino acids dramatically enhances the catalytic efficiency for an analog of soman, by ~3 orders of magnitude [98]. Further studies combining rational design and directed evolution led to the selection of mutants from randomized libraries. The catalytic activity of these mutants against the Sp enantiomers of nerve agent analogs and against racemic real nerve agents was greatly improved [99,100]. This rational design approach led to multiple mutants with kcat/Km up to 4 orders of magnitude higher than that of wild-type PTE against V agents [87,101,102]. A study with the latest designed mutants proved that in vivo detoxification of VX is possible [102]. A theoretical study suggested that enzymatic hydrolysis of novichok agents is also possible [103] and, indeed, a recent work showed that multiple mutants of Brevundimonas diminuta PTE may degrade these OPs [104]. However, fast enzymatic hydrolysis of phosphoramidates like novichoks is a challenge, owing to electron delocalization along the P-bonded amidine chain, which prevents an effective nucleophilic attack of water on the phosphorus atom.

Numerous studies highlight the potential of Brevundimonas diminuta PTE for surface decontamination and skin protection [62,105,106]. Administration of the wild-type enzyme or mutants before or after OP exposure was shown to improve pharmacological pre-treatment and current treatments of OP intoxication [107]. However, in order to prevent abnormally fast pharmacokinetics and/or an immunological response due to the injection of a bacterial enzyme, PTE can be PEGylated [108] or encapsulated. In vivo assays with PTE encapsulated in murine erythrocyte ghosts were promising [109]. Later, the encapsulation of PTE in liposomes protected rats from multiple LD50 doses of paraoxon [110]. Blood detoxification through extracorporeal circulation devices, e.g., a cartridge containing immobilized PTE, has been proposed [111]. However, storage and implementation of such devices are difficult under field conditions [112]. Different formulations of PTEs have also been evaluated for mild decontamination of mucous membranes and wounds, as well as for skin protection in topical skin-protectant creams or covalently coupled to the skin cornified layer [113]. However, the limited long-term stability of these formulations has impaired their practical use so far.
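These catalytic-efficiency figures translate directly into the speed at which an injected enzyme clears an OP from blood: under first-order conditions (see the kinetics discussion later in this review), the OP half-life is ln 2 / ((kcat/Km)·[E]). The Python sketch below compares the in-text wild-type value for soman with a ~1000-fold improved mutant; the 10 mg dose, the ~36 kDa subunit mass, and the 3 L plasma volume are illustrative assumptions, not clinical figures.

    # Back-of-envelope sketch: how catalytic efficiency (kcat/Km) translates into
    # an OP half-life in plasma. The wild-type value for soman (6e5 M^-1 min^-1)
    # comes from the text; the mutant is taken ~3 orders of magnitude better.
    import math

    dose_g, mw, plasma_l = 10e-3, 36_000, 3.0   # assumed dose, subunit mass, volume
    E = dose_g / mw / plasma_l                  # ~9e-8 M active-site concentration

    for name, eff in [("wild-type PTE", 6e5), ("evolved mutant", 6e8)]:
        k_obs = eff * E                         # pseudo-first-order constant, min^-1
        print(f"{name}: t1/2 of OP in blood ~ {math.log(2) / k_obs:.2g} min")

Under these assumptions, the wild-type enzyme clears half of the circulating OP in roughly ten minutes, whereas the evolved mutant does so in under a second, which is why catalytic efficiency is the decisive parameter for bioscavenger dosing.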
Brevundimonas diminuta PTE has also been entrapped in additives and paints for surface coating. In particular, it was found to be effective in inhibiting quorum sensing and preventing the formation of biofilms on different surfaces, including boat hulls (https://www.gene-greentk.com, accessed on 14 July 2024). PTE-containing additives were shown to retain the catalytic properties and stability of the enzymes [114]. For decontamination of the environment and remediation, phytodegradation of OPs by transgenic plants expressing a bacterial PTE has been considered as a potentially low-cost, effective, and environmentally friendly method [115]. Chemical modification of enzymes may improve their catalytic properties. For example, a His-tagged PTE [116] was reported to degrade numerous OPs, including VX, at a high rate. Since the enzyme is otherwise the wild-type PTE, it is suggested that the presence of the His tag plays a role in this high activity. Though neither the 3D structure nor the molecular dynamics of the modified enzyme are available, it can be hypothesized that the His tag increases the enzyme's flexibility, which in turn should improve its capability to accommodate OP molecules and increase its catalytic activity.

SsoPox, isolated from Sulfolobus solfataricus, has a high potential for the degradation of OPs [118]. Mutants of SsoPox with significantly increased PTE activity were produced by genetic engineering. The SsoPox-αsD6 mutant is the most interesting of these mutated enzymes. Using an E. coli BL21(DE3)-pGro7/GroEL (TaKaRa, Shiga, Japan) chaperone-expressing strain, it was cloned into a pET32b-∆trx plasmid and functionally expressed [55].

The 3D structures, evolution, stability, and catalytic properties of several of these PLLs have been determined [69,71,119-122]. These enzymes, both wild-type and evolved mutants with improved catalytic efficiency against OPs (re-designed active centers), have been conveniently expressed in E. coli, where their heat stability allows easy purification [71,120,123-125].

Owing to the high thermal stability of these archaeal enzymes, which allows long-term storage at temperatures above room temperature, fieldable uses for different purposes are possible. Moreover, the techniques of encapsulation in nanoparticles, which involve heat processes, do not denature these enzymes during the preparation of nano-formulations.

Other Bacterial Enzymes Reacting with OPs

Other classes of hydrolases involved in the biodegradation of OPs have been discovered. Methyl parathion hydrolase (MPH) is a member of the β-lactamase superfamily. It is active against various OPs and is present in several phylogenetically distinct bacteria. However, its substrate range is smaller than that of OPH [126]. Each monomer of this homo-dimeric enzyme contains a hetero-binuclear Zn²⁺/Cd²⁺ center. Zn²⁺ can be substituted by Co²⁺, Ni²⁺, and Mn²⁺, while Cd²⁺ can also be substituted by Co²⁺, Ni²⁺, Mn²⁺, and Fe²⁺. There is currently little knowledge about the MPH reaction mechanism. Three hydrophobic pockets comprise the active site of MPH [50]. Residues Leu65, Leu67, Phe119, Trp179, Phe196, Leu258, and Leu273 are part of the substrate-binding pocket. While alanine substitutions of Phe196 and Leu273 enhance enzymatic activity towards the substrate p-nitrophenyl diphenylphosphate, alanine substitutions of Phe119, Trp179, and Phe196 are detrimental to the catalytic activity towards methyl parathion [127].
Organophosphorus acid anhydrolase (OPAA) is also an important enzyme. With no structural or gene-sequence similarities to OPH or MPH, OPAA, encoded by the opaA gene, was shown to be a member of the dipeptidase family in Alteromonas undina and Alteromonas haloplanktis [58]. In 1992, the International Union of Biochemistry named the P-F or P-CN bond-degrading enzymes OPAA. OPAA is a single-chain polypeptide with a molecular weight of 58 kDa, acting in a temperature range of 10-65 °C (optimum at 40-55 °C) and in a pH range of 6.5-9.5 (optimum at 7.5-8.5) [59].

This metalloenzyme is a tetramer (a dimer of dimers) harboring binuclear Mn²⁺ ions in the active site. It can hydrolyze various OPs. In contrast to P-O or P-C bonds, against which the enzyme shows very little activity, and P-S bonds, which are resistant to hydrolysis, OPs with P-F bonds show a high degree of hydrolysis [19]. The active site is located in an oval pocket in the β-sheet part of the C-domain. There are three pockets at its binding site: small, large, and leaving. The small pocket is lined with Tyr212, Val342, and His343, and capped with Asp45 from the N-terminal domain of the opposing dimer subunit. Tyr292 and Leu366 are found in the leaving pocket, whereas Leu225, His226, His332, and Arg418 line the large pocket, which is capped by Trp89 from a different subunit. According to the suggested mechanism for OPAA enzymes, the two manganese(II) ions share a bridging hydroxide, which initiates a nucleophilic attack on the phosphorus center, resulting in the production of a transient intermediate that subsequently departs with the leaving group [118].

In addition, we must mention the potential interest of other extremophile OP-reacting enzymes isolated from halophilic bacteria (Alteromonas), such as OPAA, and from the radio-resistant bacteria Deinococcus radiodurans and Agrobacterium radiobacter. The 3D structures and catalytic mechanisms of these enzymes were also determined and used for structure-based random mutagenesis and rational design to improve enzyme catalytic efficiency against OPs [128-130]. Recent mutagenesis of OPAA generated new mutants against chemical warfare nerve agents (CWNAs); one of these mutants displayed the highest activity against soman [131]. Mining of genomic databases also allowed the discovery of a new OP-scavenging enzyme: esterase-2 from the hyperthermophilic planctomycetota bacterium Thermogutta terrifontis [132]. Although this enzyme is not very effective, its catalytic activity could be improved, and its discovery demonstrates that mining research in databases is promising. Moreover, new and fast fluorimetric screening methods allow the identification of highly active PTEs in micro-organisms from various biotopes [133].
Bacterial ChEs have been known for a long time [140]. A 43 kDa AChE-like enzyme from Pseudomonas fluorescens was isolated [141]. This enzyme displays a low sensitivity to OPs, with a bimolecular rate constant of the order of 0.5 × 10² M⁻¹ min⁻¹ with echothiophate and DFP (diisopropyl fluorophosphate). The phosphorylated enzyme cannot be reactivated by oximes. More recently, the 3D structure of a related AChE-like enzyme from Brevundimonas diminuta was solved [142]. This 30 kDa enzyme has an α/β/α fold distinct from the α/β fold of eucaryotic ChEs and displays low sequence homology with other ChEs. However, its catalytic triad resembles that of ChEs. Despite the differences with eucaryotic ChEs, the functional convergence between eucaryotic and procaryotic ChEs could be exploited. In particular, knowledge of the 3D structures and molecular dynamics of AChE and BChE [143] opened a way to the rational re-design of ChEs into OP hydrolases. In particular, the possibility of converting ChEs into an OP hydrolase (OPH) has been attempted [144]. However, the mutated enzymes display low OPase activity, and the mechanism of dephosphylation of these mutants is still debated [145]. Nevertheless, the computer-assisted design of new ChE mutants is conceivable. "Intelligent" directed mutagenesis design, based on the simulation of reaction mechanisms and the modeling of intermediates and transition-state structures with quantum mechanics (QM) and quantum mechanics/molecular mechanics (QM/MM) calculations along the dephosphylation reaction coordinate, may allow the design of highly active mutants. Thus, new molecular dynamics methods using principal component analysis and Markov chain models could be implemented to explore reaction paths before the construction of designed mutants. Application of these methods to bacterial ChEs is expected to speed up the process of mutant creation and to considerably decrease the cost of their functional expression.

Bacterial CaEs can break down malathion by cleaving one or both carboxylester groups to produce mono- or di-acid derivatives, but the OP-reacting properties of these enzymes have not been extensively explored. Also, fungal cutinase, a lipolytic enzyme exhibiting a high initial malathion degradation rate (approximately 60% in 30 min), can generate malathion monoacid and malathion diacid. Yeast esterase, a lipolytic enzyme obtained from Lysinibacillus sp. KB1, can degrade malathion and generates malathion dicarboxylic acid and malathion monocarboxylic acid [60].
Oxidases are also of interest, in particular laccases. Laccases (EC 1.10.3.2) are phenol oxidoreductases. A laccase from Pseudomonas sp. S2 produced in a bioreactor was found to oxidize OP pesticides in a short time [146]. Moreover, phosphorothiolates (P-S bonded OPs) and phosphoramidates (P-N bonded OPs) are almost resistant to PTEs. Thus, oxidative cleavage of the P-S and P-N bonds could be achieved by oxidases like laccases. These enzymes could be used in medical countermeasures in association with other OP-degrading enzymes. Though no work has been reported on the combined action of oxidases and hydrolases, the oxidation of P-bonded alkyl/aryl chains by oxidases is expected to alter the enantioselectivity of PTEs for the parent OPs. Therefore, biopharmaceutical formulations in which oxidases and PTEs are combined may improve the efficiency of catalytic bioscavengers. Nevertheless, bacterial laccases could at least be used for decontamination and environmental remediation [147]. Moreover, enzymes that degrade OPs are of interest for the destruction of CW stockpiles and for the decontamination of materials, protective equipment, and water polluted by pesticides and CW OPs [148].

Chloroperoxidase, a fungal peroxidase from Caldariomyces fumago, is capable of converting OP insecticides that have a phosphorothioate group (P=S). However, the oxidized products were found to be oxon (P=O) derivatives, in which an oxygen atom has taken the place of the sulfur atom of the thioate group. These oxon forms are more hazardous than the parent insecticide [57]. There are also several other enzymes, such as aldehyde oxidase, esterase, glutathione S-transferase, reductive dehalogenase, dioxygenase, aminopeptidase, nitroreductase, laccase, and peroxidase, that are produced by several microbes and have been reported to degrade a variety of OP pesticides [149]. However, like chloroperoxidase and cytochromes P450, they may lead to more toxic OPs.

Chemical modification and protein fusion have been used extensively in protein engineering as standard methods for improving protein function. For instance, it was interesting to discover that an N-terminal dodecahistidine tag (His12-OPH) increased the catalytic efficiency of OPH towards parathion and methyl parathion by 30- and 74-fold, respectively, compared to wild-type OPH. It was also intriguing to learn that, by lengthening the polyhistidine tag from six to twelve His residues, the optimal pH of the fused OPH could be progressively shifted to the alkaline range. Furthermore, the tendency of His12-OPH to oligomerize caused decreased thermostability at temperatures below 50 °C and increased thermostability at temperatures above 50 °C. OPH fusion with a 30 kDa sequence of alternating glutamic acid and lysine residues (EK) at the C-terminus is another intriguing case. Compared to wild-type OPH, the fusion disrupts the formation of the OPH dimer and produces a stable monomeric OPH that exhibits a modest increase in thermostability and a 70% increase in substrate affinity through a reduced Km [50].
Research on, and the isolation and engineering of, microbial enzymes capable of neutralizing OPs, either as stoichiometric or catalytic bioscavengers, has been undertaken in collections of known micro-organisms, in natural environments polluted by OPs, in extreme biotopes, and by mining protein and DNA sequence databases [132,150]. Finally, the mining of such enzymes and DNA sequences of interest, their mutagenesis and functional expression in simple bacterial hosts (e.g., E. coli), and, alternatively, the engineering (computer design and/or directed evolution) of known enzymes capable of degrading OPs are the most promising short-term research routes towards effective enzymes of interest.

The Medical Bioscavenger Concept

Lessons from endogenous OP-reacting enzymes and proteins show that the acute toxicity of OPs can be countered by dramatically lowering OP concentrations in the blood compartment. This can be achieved by trapping/inactivating OP molecules on the skin and exposed mucous membranes, and in the bloodstream. The neutralization of OP molecules prevents their transfer to cholinergic synapses (peripheral cholinergic system nodes, central nervous system and neuromuscular junctions) and other biological targets (Figure 4).

Figure 4 (caption): ChEs are the main targets (see Figure 1). Reaction of OPs with secondary targets (carboxylesterases, serine-amidases, peptidases, and other proteins) may be responsible for non-cholinergic sub-lethal effects of OPs and chronic toxicity at low-dose exposure. (Adapted from [112]).

The concept of bioscavengers and the developments of this medical approach in prophylaxis and post-exposure treatment of OP poisoning have been covered in several reviews [151-154].

The use of bioscavengers is the most effective alternative approach for the neutralization or detoxification of OPs, surface decontamination under mild conditions, pretreatment or prophylaxis, and post-exposure treatment of OP poisoning. The administration of bioscavengers by the i.v. or i.m. route leads to the neutralization of toxic molecules in the bloodstream before they reach physiological targets, thus providing protection against poisoning.

First-generation bioscavengers are stoichiometric enzymes that react mole-to-mole with OPs. However, considering the bioscavenger/OP molecular mass ratio, stoichiometric neutralization of OPs requires the administration of huge amounts of costly biopharmaceuticals [155]. The second generation of bioscavengers, or catalytic bioscavengers, are enzymes that use OPs as substrates. They neutralize OPs with a turnover and therefore need to be administered at much lower doses than stoichiometric bioscavengers for the same efficacy [156]. Catalytic bioscavengers could also be introduced into protective topical creams. Thus, the introduction of catalytic bioscavengers into protective devices and as medical countermeasures against OP poisoning considerably improves the efficacy of prophylaxis and post-exposure treatments.

Stoichiometric Bioscavengers

Starting from the end of the 1980s, research on bioscavengers mostly focused on enzymes that specifically react with OPs, in particular human BChE. Human BChE has since proved to be an effective stoichiometric bioscavenger for pre- and post-exposure treatment of OP poisoning by pesticides and CWA [157-159]. Among the secondary targets of OPs, albumin is certainly one of the most interesting proteins. Owing to the high number of albumin residues that covalently bind OPs (5 tyrosines and 2 serines) [160], it may be hypothesized that the reactivity of these residues could be enhanced by genetic engineering and/or specific chemical modification. New reactive residues could also be created by site-directed mutagenesis. Thus, engineered albumins are thought to be a route to novel stoichiometric bioscavengers. However, the conversion of albumin into a catalytic bioscavenger is presently unrealistic, because this would imply increasing the catalytic efficiency (kcat/Km) by several orders of magnitude.
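The mole-to-mole constraint explains the "huge amounts" mentioned above. The back-of-envelope Python sketch below uses the textbook VX molecular weight and ~85 kDa of BChE protein per active site (one tetramer carries four sites); the absorbed OP dose is an assumed challenge, not a reference value.

    # Order-of-magnitude sketch of why stoichiometric scavenging is costly: one OP
    # molecule consumes one enzyme active site, so the required protein mass scales
    # with the scavenger/OP molecular-mass ratio.
    op_dose_g = 0.5e-3       # 0.5 mg of absorbed VX (assumed challenge)
    op_mw     = 267.4        # g/mol, VX
    site_mw   = 85_000       # g/mol of BChE protein per active site (tetramer/4)

    enzyme_g = op_dose_g / op_mw * site_mw
    print(f"BChE needed, mole-to-mole: ~{enzyme_g * 1e3:.0f} mg")   # ~160 mg

The result, on the order of 150-200 mg of enzyme per casualty, is consistent with the ~200 mg dose of plasma-derived BChE discussed in the requirements section below.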
The main limitation of stoichiometric bioscavengers is their cost, because huge doses of enzyme have to be administered to challenge the OP molecules [155] without inducing unwanted side reactions. A way to circumvent the dose limitation is to reactivate the administered enzyme in vivo after its reaction with OP molecules, turning the stoichiometric bioscavenger into a pseudo-catalytic bioscavenger.

Pseudocatalytic Bioscavengers

Because OPs phosphylate the active-site serine of ChEs (Figure 1, reaction 2), OPs may be regarded as pseudo-substrates of ChEs [2]. When ChEs react with substrates, i.e., carboxyl-esters, there is a rapid turnover: after formation of the Michaelian complex, the acyl-enzyme intermediate is transiently formed, and then the acyl group is rapidly displaced by a water molecule acting as a co-substrate. On the contrary, in the case of OPs, because of the stereochemistry of the phosphyl-enzyme intermediate (Figure 1, reaction 2), the accessibility of water for attacking the phosphorus atom is restricted, and the enzyme remains phosphylated, i.e., irreversibly inhibited. However, certain ChE mutants not susceptible to aging after phosphylation can be reactivated by nucleophilic agents (Figure 1, reaction 3). For instance, the human AChE double mutant Y337A/F338A [161] is reactivated in the presence of an oxime, the oxime acting as a pseudo-catalyst by displacing the OP group bound to the active-site serine. Such a mutated enzyme coupled to a reactivator could behave like a pseudo-catalytic bioscavenger [162]. A first practical realization of such a self-reactivating system was reported in [163]: the authors made a polymer-oxime-BChE macro-conjugate capable of reacting with OPs, with subsequent slow self-reactivation of the enzyme due to the associated multiple oxime moieties.

However, the practical efficiency of pseudo-catalytic bioscavengers in vivo requires the implementation of new oximes displaying a higher affinity for phosphylated ChEs, a higher reactivation constant (kr), and a long residence time in the bloodstream. Moreover, the pharmacokinetic profiles of the enzyme and the reactivator must be similar. The enzymes can be chemically modified for long residence times in the bloodstream; the clearance of oximes from blood, however, is generally fast. To circumvent these pharmacokinetic issues, oximes can either be encapsulated into nanocontainers for slow release, and thus prolonged action, in the bloodstream [112], or both the enzyme and the oximes can be co-encapsulated into circulating enzyme nanoreactors, in which the coupled reactions of bioscavenger phosphylation and subsequent oxime-mediated reactivation of the phosphylated enzyme take place [164].
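How much the oxime recycling gains can be estimated with one line of arithmetic: ignoring aging and assuming phosphylation is fast relative to reactivation, each scavenger site is re-used roughly once per 1/kr, so the neutralization capacity grows linearly with time. A minimal Python sketch, with kr an assumed reactivation constant chosen purely for illustration:

    # Sketch of the gain from pseudo-catalytic recycling: with oxime on board,
    # each scavenger site is re-used roughly once per 1/k_r (aging ignored).
    k_r = 0.05            # min^-1, assumed oxime-mediated reactivation constant
    for t_min in (30, 120, 360):
        turnover = 1 + k_r * t_min   # OP molecules neutralized per enzyme site
        print(f"after {t_min:>3} min: ~{turnover:.0f} OP molecules per site")

Even with this modest assumed kr, a few hours of recycling multiplies the capacity of a given enzyme dose several-fold compared with purely stoichiometric scavenging.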
Catalytic Bioscavengers

Catalytic bioscavengers are enzymes or catalytic antibodies capable of degrading OPs with a turnover (kcat). These catalysts detoxify OPs by hydrolyzing phosphoester bonds. Organophosphorus acid anhydride hydrolases (OAAH), OP hydrolases (OPH, OPase), phosphotriesterases (PTE), and prolidases that catalytically hydrolyze OPs can, as stated above, be used as catalytic bioscavengers. Other enzymes, like oxidases, lead to less toxic compounds by degrading the OPs' alkyl/aryl chains through oxidation. Several reviews deal with catalytic bioscavengers, in particular those of bacterial origin [87,165-167]. Thus, the catalytic bioscavenger concept is based on the idea of continuous degradation of OP substrates with a turnover after administration of these enzymes. As with stoichiometric bioscavengers, these enzymes act in the bloodstream and neutralize OPs before the toxic molecules reach physiological targets. Prophylactic injection of enzymes capable of hydrolyzing OPs quickly (alone or in association with current prophylactic countermeasures) would then allow workers and specialized personnel to operate safely in contaminated environments or to provide medical assistance to contaminated casualties under safe conditions. Administration of catalytic bioscavengers to poisoned casualties is also expected to greatly improve the efficacy of classical pharmacological countermeasures [168-170]. In addition, catalytic bioscavenger formulations could be implemented for skin protection in nano-formulations [171] and in decontaminating solutions for body decontamination [105,106]. A few practical enzyme formulations for OP decontamination have been marketed so far, e.g., VesuTOX (www.gene-greentk.com). Genetically engineered bacteria producing OP hydrolases can also be used for the decontamination of water effluents, as well as for the purification of contaminated water before recycling or release into the environment [172]. At this point, the medical applications of catalytic bioscavengers merge with environmental applications for remediation.
Requirements in Medicine for Efficacy and Safety of Injected Bacterial Enzymes

The general requirements for the medical use of OP-degrading enzymes against OP poisoning are as follows:

(1) The enzymes must react effectively with a broad spectrum of OP molecules (combining cocktails of several enzymes displaying different specificities towards OPs also enlarges the spectrum and may lead to multi-enzyme formulations against several toxicants and for multipurpose uses); ideally, these enzymes must display enantioselectivity for the most toxic OP stereoisomers.

(2) The enzymes must not induce iatrogenic effects after injection or upon topical application.

(3) Mass production of highly purified, sterile wild-type and mutant enzymes, free of detectable contaminants, under good manufacturing practice conditions must be realizable at reasonable cost. For instance, the cost of one dose (200 mg) of the first-generation bioscavenger, human plasma-derived BChE, was estimated to be higher than USD 20,000. This high cost considerably limited the practical interest of stoichiometric bioscavengers for medical treatments. In the case of catalytic bioscavengers of microbial origin, the injection of much lower doses of enzyme (e.g., <10 mg) with high catalytic efficiency (kcat/Km > 10⁶ M⁻¹ min⁻¹) leads to much lower costs. The acceptable cost would be about USD 50 per dose.

(4) Long storage (several years) without activity loss (either in lyophilized form, in solution, or adsorbed/bound on a matrix, in a gel, or in a foam) must be feasible. This requirement is mandatory, owing to the cost of enzyme production. The conformational stability of enzymes is an issue, in particular for storage and field uses. It can be increased by chemical modifications or by the addition of stabilizers like polyols, e.g., trehalose. Otherwise, the use of thermostable PLL-PTEs from hyperthermophilic bacteria/archaea [69,75] expressed in E. coli, or of mutated/evolved highly stable enzymes from mesophilic bacteria, are alternatives.
Currently, several nanotechnological strategies are known for implementing bacterial OP-degrading enzymes for medical purposes (Figure 5). Immobilization strategies, involving adsorption, covalent binding, or copolymerization, and the preparation of nanocomposites, Metal-Organic Frameworks (MOFs) [173,174], and silica nanoparticles [175] are a first solution. At the same time, traditional delivery systems for encapsulating bacterial enzymes such as recombinant PTEs and organophosphorus hydrolase can be developed, using red blood cells [176], sterically stabilized liposomes [177-180], poly(2-ethyloxazoline)-based core-shell dendritic polymer micelles [181], capsules [175,182], and nano-complexes [183] as vehicles. In the case of immobilization and encapsulation, there is a decrease in the effectiveness of injected enzymes due to diffusion through the membrane or pores of the encapsulating material and to rapid clearance by the immune system. These solutions have other limitations for biomedical applications; one of the main concerns is biosafety (biocompatibility, iatrogenicity and long-term toxicity, immunogenicity, and pharmacokinetic issues (distribution, absorption, excretion, and metabolism)) [184,185]. There are few examples in the literature where the catalytic efficiency of immobilized enzymes is maintained or increased compared to the efficiency of free enzymes in buffers [173,186]. Therefore, to improve the effectiveness of nano-therapeutic drugs, a new alternative approach was recently developed: the creation of biomedical robotic nanodevices for detoxification. Unlike traditional passive nanotherapeutics, nanodevices can perform various complex biomedical functions in the event of unexpected biological events [187,188]. Typically, traditional drug delivery systems aim to encapsulate therapeutic agents and release them into target tissues under the control of external stimuli. In contrast, nano-detoxifying devices, which are one- or multi-compartment devices with a size close to 100 nm, remove drugs and xenobiotics from biological tissues [189,190]. By optimizing physicochemical parameters such as the size and the ratio of functional components, biocompatible broad-spectrum polymer nano-antidotes could rapidly advance into clinical use [191]. Among them, we must mention nano-sponges [192-194], nano-scavengers [195-197], and nanoreactors [198,199]. Usually, nanoreactors are two-phase systems [200] such as vesicles, polymersomes, proteinsomes, and capsosomes. They have found application as mimics of organelles and living cells [201]. The large surface area of these nano-compartments promotes faster reaction rates compared to bulk materials with immobilized enzymes. The probability and efficiency of the reaction increase due to the spatial confinement of the reaction machinery and of the reagents that get inside and interact with the encapsulated enzyme(s). In addition, biocatalytic reactions can proceed with higher selectivity or fewer side reactions in a confined space.
The use of enzymes in effective enzyme nanoreactors for the prophylaxis and post-exposure treatment of paraoxon poisoning illustrates the interest of such encapsulated enzyme systems: it significantly reduces mortality and intoxication symptoms [198,199], improves tolerance to the poison and attenuates oxidative stress and organ damage [196,202], and penetrates the BBB to eliminate intracerebral OP molecules, thus impairing/limiting oxidative stress, neuroinflammation, and neuronal apoptosis of neurocytes [203]. This demonstrates the unique functionality of these biomedical nanodevices.
Specific efficacy requirements depend on the route of administration, the delivery system, and the pharmaceutical formulation. Enzymes can be injected intravenously, intramuscularly, or subcutaneously, or administered intranasally. For the optimal efficacy of administered enzymes acting as catalytic bioscavengers, knowledge of the toxicant concentration profile versus time in blood, i.e., its toxicokinetics, is very useful for optimizing therapeutics. In most cases, this is difficult to determine. Otherwise, the determination of [OP]t at fixed times after t0 (t0, time of exposure), through its irreversible inhibitory action on a reporter enzyme, e.g., BChE, must be considered [204]. It should be noted that even in the most severe cases of poisoning, [OP] is always very low. For example, the concentrations of sarin in the serum of casualties of the Matsumoto and Tokyo chemical attacks in 1994 and 1995 were estimated at between 1.5 and 30 nM, 14 h after poisoning [205]. Thus, in human plasma, [OP] << Km of the catalytically competent enzymes that react with OPs as substrates.

Therefore, under such reaction conditions, the kinetics of enzymatic hydrolysis of OPs in blood is always first-order [111,156], so that the simple Michaelis-Menten rate (v) equation reduces to Equation (1):

v = (kcat/Km) × [E] × [OP]    (1)

In this equation, the product of the bimolecular rate constant (kcat/Km) and the enzyme active-site molar concentration ([E]) is the first-order rate constant (expressed in min−1). Therefore, the enzyme dose to be injected for degradation of the toxicant in a very short time depends on the enzyme catalytic efficiency, i.e., kcat/Km. The higher this parameter, the lower the enzyme dose to be administered. The enzyme concentration needed to drop the OP concentration to a non-toxic level in a time t, as short as possible, is given by Equation (2):

[E] = X/((kcat/Km) × t)    (2)

In Equation (2), X is the factor by which [OP] is reduced in time t (X = ln([OP]0/[OP]t)). Owing to the fast flow rate of blood circulation in humans, with an average time of 1 min for a complete cycle, X must be estimated per minute. The X value must be high to prevent the transfer of highly toxic OP molecules from the blood compartment to nervous system targets. Because the cost of enzymes is still a limiting factor, [E] cannot be dramatically increased; thus, it is the catalytic efficiency that must be optimized. As written above, the kcat/Km and the stereospecificity of enzymes can be increased by several orders of magnitude by site-directed mutagenesis, directed evolution, or chemical engineering [98,100,167,206,207]. Engineering strategies to increase kcat/Km have been theorized [208]. The implementation of artificial intelligence algorithms is expected to soon lead to potent computer-designed enzyme mutants.

In the case of enzyme nanoreactors, where the kinetics of degradation take place under second-order conditions ([E] > [OP]), kcat/Km also has to be as high as possible [164] for inactivation in a very short time. Moreover, Equation (2) assumes that the operational stability of the administered enzyme has been optimized, so that [E] in the bloodstream does not decrease too rapidly during the time course of the reaction with OP molecules in blood. In fact, [E] must be maintained as high as possible for a long time.
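To make Equation (2) concrete, the following minimal Python sketch estimates the active-site concentration [E], and a rough plasma-wide mass dose, needed to reduce [OP] by a chosen factor within one circulation time. The catalytic efficiency, reduction factor, plasma volume, and enzyme molar mass used here are illustrative assumptions, not values taken from the text.

```python
import math

# Illustrative assumptions (not from the source text):
KCAT_OVER_KM = 1e8      # catalytic efficiency, M^-1 min^-1 (evolved mutant)
T = 1.0                 # reaction time, min (~ one blood circulation cycle)
REDUCTION_FACTOR = 100  # drop [OP] 100-fold within time T
PLASMA_VOLUME_L = 3.0   # adult plasma volume, L (rough figure)
ENZYME_MW = 36_000      # enzyme molar mass, g/mol (PTE-like monomer, assumed)

# Equation (2): [E] = X / ((kcat/Km) * t), with X = ln([OP]0 / [OP]t)
X = math.log(REDUCTION_FACTOR)
enzyme_conc_M = X / (KCAT_OVER_KM * T)

# Convert the required molar concentration into a plasma-wide mass dose
dose_mg = enzyme_conc_M * PLASMA_VOLUME_L * ENZYME_MW * 1e3

print(f"X (per {T} min): {X:.2f}")
print(f"required [E]: {enzyme_conc_M:.2e} M")
print(f"approximate dose: {dose_mg:.2f} mg")
```

Under these assumptions the dose lands in the few-milligram range, consistent with the order of magnitude quoted above for high-efficiency catalytic bioscavengers; halving kcat/Km doubles the required dose, which is why catalytic efficiency is the parameter to optimize.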
[E] is controlled by pharmacokinetics/pharmacodynamics and by the frequency of repeated administrations of the enzyme preparation (sustained pharmacokinetics). Increasing the size of the enzyme by polymerization, conjugation to other proteins (e.g., albumin, antibody fragments) or to biodegradable polymers, and chemical modifications ("capping" of solvent-exposed surfaces) improve the operational stability, i.e., the residence time in blood of injected enzymes. It must be noted that the fast clearance of bacterial enzymes may result from their small size [209]. Enzyme clearance can be slowed down by chemical modifications such as PEGylation, polysialylation, and other conjugations, e.g., to dextran or other macromolecules, including proteins like albumin. Also, as for other detoxifying enzymes [210], nanoencapsulation of enzymes into nanocarriers ("nanoscavengers") may greatly increase their residence time in the bloodstream [195] and suppress potential adverse effects such as immuno-reactivity. However, the possible partial encapsulation of large molecules such as enzymes in nanocontainers, thus forming a "corona", may impair the advantage of nanoencapsulation.

The catalytic properties of membrane-bound or membrane-anchored enzymes can deteriorate [211,212] and depend on the curvature, molecular density, packing defects, and thickness of the membrane [213]. Enzymatic activity is associated with the availability of substrate for the enzyme located in the membrane and will be maintained in the case of a favorable orientation of the enzyme on the surface of the nanostructure [214]. Enzymes embedded within nanoreactors, or localized at their surface, will complicate the formation of a protein corona on the surface of the nanoparticles (Figure 6).

The "protein corona" is a natural protein layer that spontaneously forms around nanomaterials in biological environments due to interactions with proteins, lipids, and sugars, whereby the nanomaterials acquire new physicochemical properties. The formation of a protein corona changes the biological characteristics of nanoparticles, such as accumulation in tissues, cellular uptake, clearance by the immune system, and toxicity. It is extremely important to identify the type of proteins adsorbed on the surface of nanoparticles because, depending on their type, these proteins can either shorten or lengthen the circulation time of nanoparticles in the body [215]. Therefore, characterization of this protein layer will be a decisive step in
the development of new nanomedicines [216]. At the same time, the understanding of the correlation between the physicochemical characteristics of nanoparticles and protein adsorption is improving [217]. A first strategy is to reduce or prevent the formation of the protein corona using stealth systems. One of the most common surface modifications is PEGylation; such systems also include polyvinylpyrrolidone, peptides, and carbohydrates. It is impossible to completely prevent the protein corona, so the next strategy is modification with membrane components and the attachment of specific ligands or biomolecules to the nanoparticles: antibodies, protein fragments, peptides, or the membrane protein CD47 [218]. It is important to note that the phenomenon of protein corona formation becomes more complex in the case of protein-surface or protein-protein interactions [219]. Moreover, weak interactions stabilize nanoparticles, whereas strong protein-protein interactions cause their aggregation [220]. The third strategy is the creation of biomimetic nanostructures, or membrane-coating technology, in which nanoparticles are coated with the plasma membranes of various cell types, e.g., erythrocyte, macrophage, and cancer cell membranes. The use of red blood cell-camouflaged nanoparticles increases the circulation time, changes the pharmacokinetic profile, and prevents premature elimination from the body [221]. Synthetic membrane engineering has been described for binding more than 2 × 10^6 BChE molecules to an erythrocyte, providing a first line of defense against OP nerve agent exposure [222]. In another example, a cell membrane acted as an emulsifier to stabilize a nano-sized oil droplet during emulsification, simultaneously increasing the detoxification efficiency and the trapping of OP molecules [202]. Dual coating of MOF nanoparticles containing a recombinant organophosphorus hydrolase with liposomal lipids and erythrocyte membranes ensured not only the survival of mice after OP poisoning, but also biocompatibility, prolonged pharmacokinetics, and crossing of the BBB [196,203]. Despite these exciting literature results, comparisons between these studies are still challenging, mainly due to the lack of standardization of protein corona analysis [223].
The administration of homologous enzymes does not induce an immune response following a second injection [224]. On the other hand, the immunotolerance of injected heterologous enzymes is a major issue. Bacterial and archaeal enzymes may not be suitable for use in humans as such, but conjugation to dextran or polyethylene glycol may be sufficient to reduce antigenicity and non-specific immune responses, and to slow down clearance following multiple injections [225,226]. Nanoencapsulation of non-human enzymes allows them to evade the immune system and increases the residence time of the enzymes in the bloodstream [195,210]. However, enzyme-containing nanocontainers must be completely sealed to prevent leaks. This implies a sophisticated design of decorated and crosslinked multilayer nanoparticles. A multicompartment structure allows the retention of therapeutic proteins and peptides for a longer time, and there are many techniques to create such structures. For example, multivesicular liposome (DepoFoam) technology can be used to prolong therapeutic treatments and to reduce the administration frequency [227,228]. Multiple emulsions can be fabricated using microfluidic devices [229]. Polyelectrolyte multilayer nanoreactors and layer-by-layer (LbL) self-assemblies are also found in the literature [230]. Although the biological activity of the molecules is retained [231], integrating proteins and enzymes into LbL thin films is still challenging for nanomedicine purposes [232].

An alternative to the injection of enzymes is to incorporate OP-degrading enzymes into medical dialysis systems. This approach could greatly improve the efficiency of hemodialysis. In this respect, OP-reacting enzymes can be immobilized on dialysis cartridge membranes [233]. Co-immobilization of different enzymes could be an easy way to extend the spectrum of OPs to be degraded. The accessibility of OP molecules to the enzyme active center must not be altered by the immobilization method or by matrix effects. [E] per unit surface has to be maximized to reduce diffusion constraints and increase the reactive surface. Again, kcat/Km has to be as high as possible, and the flow rate must be reduced to increase the efficiency of the detoxification process (a minimal single-pass model of such a cartridge is sketched after this section). First-order degradation kinetics takes place under the particular conditions of immobilized enzymes in a continuous-flow system. The above-mentioned enzyme nanoreactor approach could be an alternative to extracorporeal immobilized-enzyme cartridges [112]. Lastly, in situ transient production of enzymes should be possible by gene therapy in the future.

For external uses, in decontaminant solutions and topical protectants, enzymes have to be highly concentrated and display high catalytic activity. The situation is similar to that of enzyme nanoreactors, where the concentration of encapsulated enzyme is high. In fact, enzyme nanoreactors could be embedded in creams or gels to make active topical skin protectants. A mixture of different enzymes should allow the simultaneous detoxification of various OPs and other types of toxicants. In particular, exposure to multiple toxicants cannot be ruled out in worst-case chemical warfare scenarios, as lessons from previous conflicts and terrorist acts remind us.
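As flagged above, here is a minimal single-pass model of an extracorporeal cartridge carrying an immobilized OP-degrading enzyme, treated as a plug-flow reactor with first-order kinetics. The catalytic efficiency, effective enzyme concentration, cartridge volume, and blood flow rates are assumptions chosen only to illustrate why reducing the flow rate increases removal per pass.

```python
import math

def single_pass_removal(kcat_over_km, enzyme_conc_M, cartridge_volume_mL, flow_mL_min):
    """Fraction of OP hydrolyzed in one pass through the cartridge.

    Plug-flow, first-order model: f = 1 - exp(-k * tau), where
    k = (kcat/Km) * [E]_effective and tau = V / Q is the residence time.
    """
    k = kcat_over_km * enzyme_conc_M          # first-order constant, min^-1
    tau = cartridge_volume_mL / flow_mL_min   # residence time, min
    return 1.0 - math.exp(-k * tau)

# Illustrative assumptions (not from the source text):
f_fast = single_pass_removal(1e7, 1e-6, 100, 300)  # high blood flow
f_slow = single_pass_removal(1e7, 1e-6, 100, 100)  # reduced blood flow

print(f"removal per pass at 300 mL/min: {f_fast:.1%}")
print(f"removal per pass at 100 mL/min: {f_slow:.1%}")
```

The residence time tau = V/Q is what the flow rate controls: with these numbers, the model predicts near-complete removal in a single pass at 100 mL/min, while tripling the flow rate noticeably lowers it.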
Requirements for Environmental Applications

Presently, there is increasing interest in the use of enzymes for pesticide bioremediation. By using enzymes for OP degradation, it is possible to develop more effective and sustainable methods for detoxifying contaminated environments. Additionally, enzyme engineering techniques can improve the activity and stability of these enzymes, increasing their effectiveness in degrading OPs. The use of OP-degrading enzymes has been a focus of research for the last two decades or so.

Fang et al. [13] demonstrated the degradation of the insecticide profenofos (10 mg/L) with the help of an organophosphorus hydrolase (OpdB) from Cupriavidus nantongensis X1T, expressed in Escherichia coli BL21 (DE3) using the expression vector pET22b-opdB. The tested enzyme showed optimal degradation activities of 46% at 37 °C and 50.6% at neutral pH. It was also observed that divalent metal cations (Ni2+, Mg2+, Co2+, and Ca2+) can increase the degrading activity of the enzyme, contrary to trivalent metal cations (Fe3+ and Cr3+), which act as strong inhibitors of the enzyme.

The degradation of methyl paraoxon (2 mM) was studied using OP hydrolases (Opds) from four fungal species, namely Penicillium nalgiovense, Fusarium sp., Aspergillus niger, and Penicillium chrysogenum [235]. The concentrated enzyme extracts from the fungal strains displayed methyl paraoxon degradation rates between about 38% and 80% after a 30-day reaction period in an acidic environment. The Opd from Penicillium chrysogenum, with the best degradation performance (80%), exhibited optimal activity at 30 °C and strong stability, retaining 80% of its initial activity after 12 h when assayed with 22.5 µM methyl paraoxon under acidic conditions (pH 2) and in the presence of detergent (9.6% SDS). Moreover, the application of the enzyme in the bioremediation of apples contaminated with 8.5 mg/kg of methyl paraoxon gave a higher catalytic rate (6.2 nM/min at 30 °C, pH 2, and 9.6% SDS) than the use of the genetically modified SsoPox enzyme (5 nM/min at 25 °C and pH 7).

In another study, PTEs from six bacterial strains (Arthrobacter oxydans ATCC 14358, Arthrobacter oxydans ATCC 14359, Nocardia asteroides ATCC 19296, Nocardia corynebacterioides ATCC 14898, Streptomyces setonii ATCC 39116, and Streptomyces phaeochromogenes CCRC 10811) were tested for their ability to break down different OPs (coroxon, paraoxon, methyl paraoxon, chlorpyrifos, methyl parathion, coumaphos, and dichlorvos). The PTE extracts from each bacterial strain were assayed separately in the presence of 0.15 mM of each OP for 21 days. Generally, the enzyme extracts exhibited greater PTE activity than whole cells. Interestingly, the PTE from Arthrobacter oxydans ATCC 14359 achieved complete degradation of methyl parathion, while 80% paraoxon and 82% coroxon hydrolysis were recorded with the PTE from Nocardia asteroides under optimized conditions (50 °C and pH 8) [236].
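To compare such endpoint figures on a common scale, one rough option is to convert a fractional degradation f observed over a time t into an apparent first-order rate constant, k_app = -ln(1 - f)/t. This assumes simple exponential decay of the substrate, which these incubations may not follow, so the sketch below is only a back-of-the-envelope comparison using the percentages quoted above.

```python
import math

def k_app(fraction_degraded, days):
    """Apparent first-order rate constant, k = -ln(1 - f) / t,
    assuming simple exponential decay of the OP substrate."""
    return -math.log(1.0 - fraction_degraded) / days

# Endpoint figures quoted above (fraction degraded, incubation days)
reports = {
    "P. chrysogenum Opd, methyl paraoxon": (0.80, 30),
    "least active fungal extract":         (0.38, 30),
    "N. asteroides PTE, coroxon":          (0.82, 21),
}
for label, (f, t) in reports.items():
    print(f"{label}: k_app ~ {k_app(f, t):.3f} per day")
```

On this crude scale, the three reports differ by roughly a factor of five in apparent rate, which is easier to see than when comparing raw percentages obtained over different incubation periods.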
Role of Nanoparticles in Bioremediation

Despite its advantages, such as environmental friendliness and cost-effectiveness, the traditional microbial-based bioremediation approach has some serious drawbacks that may reduce its efficacy. These include: (1) lower effectiveness in cleaning up extensive and highly polluted sites; (2) reduced efficacy in removing heavy metals, radioactive residues, and persistent organic pollutants; (3) excessive dependence on a number of environmental variables, including temperature, pH, and the availability of nutrients and oxygen; and (4) poor adaptation of pollutant-degrading micro-organisms to the pollutant compounds, which may reduce their degradation performance and bioavailability at contaminated sites [237]. In response to these constraints, a novel methodology known as "nano-bioremediation" has surfaced. It integrates biological processes with nanomaterials (non-biogenic or biogenic organic/inorganic nanoparticles) to achieve efficient, effective, and long-lasting remediation. These are manufactured solutions, utilizing several chemical processes (co-precipitation, co-reduction, hydrothermal, and sol-gel) and biogenic methods involving micro-organisms and plants [238]. Nanoparticles have garnered significant interest in diverse domains, such as bioremediation, owing to their distinct physical and chemical characteristics. Nevertheless, although they possess some benefits, certain obstacles are linked to their utilization, such as evaluating their environmental fate and their possible effects on ecosystems and human health, as well as minimizing the expenses related to their manufacturing and functionalization. Combining nanoparticles with bioremediation approaches has been shown to have synergistic benefits, with the combined strategy showing a more significant remediation effect than either method alone [239,240].

Combination of Nanoparticles and OP-Degrading Enzymes

Several innovative supports and techniques have recently emerged to augment traditional enzymatic immobilization, with the aim of improving enzymatic loading, activity, and stability in order to minimize the cost of enzymatic biocatalysts in bioremediation processes. These methods encompass cross-linked enzymatic aggregations, microwave-assisted immobilization, and the combination of nanoparticles with microbial, insect, or plant enzymes (peroxidases and laccases) to enhance the effectiveness of bioremediation for various types of contaminants, including OPs (Table 2). Nanoparticles are gaining particular interest because of their distinctive physical and chemical characteristics, which include, among others, a large surface-area-to-volume ratio, great mechanical strength [241], and great colloidal stability [242].

Wang et al. [243] studied the ability of a recombinant bacterial (Pseudomonas aeruginosa) organophosphorus hydrolase, attached to mesoporous silica nanoparticles covered with a zwitterionic polymer, to degrade methyl parathion.
The enzyme was expressed in E. coli Rosetta (DE3) containing the plasmid pET-20b. It was reported that, because of the zwitterionic polymer, which permitted methyl parathion enrichment onto the fabricated system, the immobilized enzyme exhibited a very low Km value (0.09 mM) in comparison to its free form (0.34 mM). The immobilized enzyme showed a catalytic efficiency (kcat/Km) of 17,367 s−1 mM−1, which was 2.4 times greater than that of the free enzyme (7221 s−1 mM−1). Moreover, the immobilized enzyme maintained roughly 80% of its initial activity after 3 h at 40 °C and showed better pH tolerance and better stability after multiple uses than the free enzyme.

In another study, an easy way to immobilize a His-tagged recombinant organophosphorus hydrolase from Brevundimonas diminuta on organic-inorganic hybrid nanoparticles, consisting of calcium phosphate nanocrystals and copper-modified bovine serum albumin, was investigated. This facilitated the fabrication of a reusable, durable, and easily purifiable biocatalyst for breaking down methyl parathion. The immobilized enzyme showed improved pH and temperature stability compared to the enzyme in its free form. It displayed a higher residual activity of about 60% at 60 °C, compared to only 20% for the free enzyme, and showed a substantially superior tolerance to both alkaline and acidic conditions. In addition, the immobilized enzyme retained about 75% and 56% of its activity after 5 and 10 uses, respectively. Nevertheless, the immobilized biocatalyst displayed a lower kcat (1767 min−1) and kcat/Km (5004 min−1 mmol−1 L) than the free enzyme (6362 min−1 and 11,822 min−1 mmol−1 L, respectively), probably due to adverse conformational modifications that may have affected the enzyme during the immobilization process [244].

Chen et al. [245] used the recombinant strain E. coli BL21 to produce a PTE. The enzyme was mixed with CoCl2 and MnCl2 to form multi-metallic PTE hybrid nanoflowers, which were tested for their ability to degrade the pesticide methyl parathion and two chemical warfare agents (soman and the nerve agent VX). More than 93% degradation of methyl parathion (190 µM) was observed in a pump-flow reactor at a flow rate of 1 mL/min. Additionally, the kcat/Km value of the immobilized enzyme was 2.9 times higher than that of the free enzyme. The immobilized enzyme efficiently degraded 60 µM soman and 40 µM VX through hydroxyl nucleophilic attack within 60 min, releasing non-toxic products. Furthermore, the produced nanoflowers showed better long-term storage and thermal stability, and better tolerance to pH, ionic concentration, inhibitors, and organic solvents than the free enzyme, making them a good candidate for treating real OP-contaminated sites.
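The kinetic constants reported in these immobilization studies can be put side by side. The short sketch below evaluates the Michaelis-Menten rate per unit enzyme, v/[E] = kcat[S]/(Km + [S]), for the free and immobilized enzyme of Wang et al. [243]; kcat is backed out from the reported kcat/Km and Km, and the low substrate concentration is an arbitrary choice for illustration.

```python
def mm_rate(kcat_over_km, km_mM, s_mM):
    """Michaelis-Menten rate per unit enzyme, v/[E] = kcat*[S]/(Km+[S])."""
    kcat = kcat_over_km * km_mM              # back out kcat from kcat/Km
    return kcat * s_mM / (km_mM + s_mM)      # s^-1

# Constants reported by Wang et al. [243] for methyl parathion hydrolysis
FREE = dict(kcat_over_km=7221.0, km_mM=0.34)
IMMOBILIZED = dict(kcat_over_km=17367.0, km_mM=0.09)

S = 0.01  # mM; a low, environmentally plausible concentration (assumed)
v_free = mm_rate(**FREE, s_mM=S)
v_imm = mm_rate(**IMMOBILIZED, s_mM=S)
print(f"v/[E] free:        {v_free:.1f} s^-1")
print(f"v/[E] immobilized: {v_imm:.1f} s^-1  ({v_imm / v_free:.1f}x faster)")
```

At substrate concentrations well below both Km values, the ratio of the two rates approaches the reported 2.4-fold difference in kcat/Km; the lower Km of the immobilized enzyme matters most in exactly this dilute regime.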
Das et al. [246] studied the degradation of chlorpyrifos by a laccase from Trametes versicolor covalently immobilized onto iron oxide nanoparticles. The obtained laccase/magnetic iron nanoparticles were used in a 30-day laboratory-scale study to eliminate chlorpyrifos from artificially contaminated soil. The experiment showed that the immobilized laccase was three times more efficient in removing the pesticide than the control (without the enzyme). It was also suggested that the copper present in the fabricated biocatalyst could act through both adsorption and degradation during chlorpyrifos elimination from the soil. This study demonstrated the potential of laccase-immobilized iron nanoparticles for the bioremediation of chlorpyrifos-contaminated soils.

Conclusions

Enzymes from micro-organisms that stoichiometrically neutralize OPs, or degrade them with a turnover, can be isolated and purified from natural sources. Constant efforts have been made in these directions for more than 30 years. Recombinant enzymes can then be easily produced using simple prokaryotic expression systems (e.g., E. coli). Engineering improvement of the catalytic activity toward the large spectrum of OPs is the main task for both medical and environmental uses. The spectrum of OPs can also be expanded by combining several engineered enzymes in bioscavenger "cocktails" [256]. Several additional requirements are mandatory for efficiency, safety, and economic reasons. Increasing the conformational stability of enzymes for long-term storage in solution or in dry form, and improving their in vivo operational stability and immunotolerance, are important goals for the medical uses of these enzymes. All these tasks imply genetic, chemical, and physical engineering of enzymes, and different strategies can be implemented. The search for new natural enzymes in collections of bacterial strains [257] and in extreme environments [258], and the identification (mining) of enzymes in genomic sequence databases, is the first task. In the past ten years, this approach has been extremely fruitful. In particular, through mining, new DNA sequences of extremophilic PLLs were identified; the genes were synthesized, the enzymes were expressed in a mesophilic bacterial host, and their catalytic properties and X-ray structures were determined [120,132]. The search for enzymes of interest by computational structure mining in the PDB database is thus extremely promising [259]. Also, because extremophilic micro-organisms have great potential, the exploration of extreme biotopes is of the utmost importance. For example, novel extremozymes, PLLs and PROLs, have been discovered by screening halophilic, hyperthermophilic, piezophilic, and radioresistant bacteria and archaea from such extreme environments. The engineering of novel enzymes is the next task. Site-directed mutagenesis and directed evolution approaches, in combination with chemical modifications and medium manipulations, have been successfully used to improve the desired properties of PTEs, in particular stereoselectivity, high kcat/Km, broad spectrum of activity, and stability [208,260,261]. Humanization of microbial enzymes is another possible engineering strategy. In particular, it should be noted that a human PROL, showing sequence homologies with the Alteromonas haloplanktis PROL, displays catalytic activity against sarin and soman [262]. Thus, humanization of this bacterial PROL by genetic engineering could be a way to produce safe and effective recombinant PROLs to be used as catalytic bioscavengers.
Alternatively, computational re-design (molecular modeling, transition-state simulations, and QM/MM approaches) of known enzymes is another emerging, fruitful strategy. The development of artificial intelligence, following the progress of in silico approaches, is expected to lead to new mutated enzymes with higher activity against wider ranges of OPs.

Thus, all implemented and integrated strategies are progressively leading to more effective enzymes with improved physical and pharmaceutical properties, allowing production at lower cost. The cost of enzyme production is certainly the main limiting factor, and considerable efforts have to be made to reduce it. For medical applications, various formulations of catalytic bioscavengers have already been used for skin protection, decontamination, and safe prophylaxis and post-exposure treatment of OP poisoning. Multiple-enzyme formulations will extend the activity spectrum of free enzymes or of enzymes encapsulated in nanoparticles or nanoreactors. Moreover, new gene therapy approaches may offer the possibility of transitory production of humanized bacterial OP-degrading enzymes in the body. However, besides ethical issues, more research work is still needed to engineer safe gene therapy vectors that do not produce toxic viral proteins and/or induce adverse immune responses.

As for applications to environmental decontamination and remediation, the sustainable and environmentally friendly approach of using enzymes for OP degradation holds immense potential for remediation efforts and for reducing the environmental impact of these toxic substances. In addition, OP-degrading enzymes can be immobilized on various supports and matrices, allowing their reuse multiple times. This decreases costs and increases the overall performance of the biocatalysts. Although OP-degrading enzymes have shown significant success in in vitro studies, their use under real bioremediation conditions still needs to overcome some drawbacks, such as their unstable efficiency, their vulnerability to organic and inorganic inhibitors, and their ineffectiveness against some OPs, such as V-type nerve agents, rendering more research necessary to resolve these issues. Future research efforts will probably focus further on using engineered OP-degrading enzymes, alone or encapsulated in nanoparticles, to attain better degradation and stability properties. For example, significant thermostability improvement of OP-degrading enzymes could be achieved with engineering techniques such as flexible loop truncation, proline substitutions, creation of ionic pair networks [263], and fusion of self-assembling amphipathic peptides [264].

Figure 1. Mechanism of inhibition, aging, and reactivation of cholinesterases (ChEs) by OPs. Two key residues in the active center of ChEs are depicted: the catalytic serine (-Ö-H) and the catalytic histidine.

Figure 2. Different enzymes involved in the degradation of OPs.
Figure 4. Biological fate of organophosphorus compounds in poisoned organisms. Routes of penetration of OPs are absorption through the skin, eyes, and/or respiratory tract (nerve agents and pesticides), or ingestion (self-poisoning). OP molecules are distributed from the blood compartment into tissues, including depot sites, physiological targets, and sites of elimination (liver and kidneys). ChEs are the main targets (see Figure 1). Reaction of OPs with secondary targets (carboxylesterases, serine-amidases, peptidases, and other proteins) may be responsible for non-cholinergic sub-lethal effects of OPs and chronic toxicity at low-dose exposure. (Adapted from [112]).

Figure 5. Nano-biotechnological strategies for implementing bacterial/archaeal OP-degrading enzymes for medical purposes. The dose-response plot shows the prophylactic and post-exposure treatment efficacy of a PTE-nanoreactor in mice challenged with paraoxon [198,199].

Figure 6. Nanotechnological strategies to avoid fast clearance and prolong the circulation of enzyme-loaded nanoparticles in the body.

Table 1. The most investigated microbial OP-degrading enzymes.

Table 2. OP removal using enzyme immobilization on nanoparticles.
2024-07-19T15:05:47.218Z
2024-07-01T00:00:00.000
{ "year": 2024, "sha1": "da65ab7d1f24f0c67ce0e1362d27f78540c4f9da", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/ijms25147822", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8893ee85244815b65059d20309368964378cd3f6", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [] }
79600519
pes2o/s2orc
v3-fos-license
Postural alignment of patients with Chronic Obstructive Pulmonary Disease

Introduction: In chronic obstructive pulmonary disease (COPD), airflow resistance impairs respiratory mechanics, which may compromise postural alignment. There is a lack of studies investigating these compromised postures and their possible associations with pulmonary function. Objectives: To compare the postural alignment of COPD patients with apparently healthy individuals, and to correlate pulmonary function with postural alignment in the COPD group. Methods: 20 COPD patients and 20 apparently healthy individuals underwent anthropometry, spirometry, and postural evaluation. The following postural changes were assessed: lateral head tilt (LHT), shoulder asymmetry (SA1), anterior pelvic asymmetry (APA), lateral trunk tilt (LTT), scapular asymmetry (SA2), posterior pelvic asymmetry (PPA), head protrusion (HP), shoulder protrusion (SP), anterior pelvic tilt (APT), and thoracic kyphosis (TK). Results: There was a statistically significant difference between COPD patients and apparently healthy individuals in the following variables: PPA (p = 0.021), APT (p = 0.014), and TK (p = 0.011). There was a correlation between pulmonary variables and postural alignment in the COPD group: Forced Expiratory Volume in one second (FEV1 % pred) and HP (°) (r = 0.488, p = 0.029); FEV1 (% pred) and APT (°) (r = -0.472, p = 0.036); Forced Vital Capacity (FVC % pred) and HP (°) (r = 0.568, p = 0.009); FVC (% pred) and APT (°) (r = -0.461, p = 0.041). Conclusion: Postural alignment of the anterior tilt of the right and left pelvis, the posterior pelvic asymmetry, and the thoracic kyphosis differed between COPD patients and apparently healthy individuals, and some of these measures correlated with pulmonary function.

Introduction

Chronic obstructive pulmonary disease (COPD) is characterized by airflow limitation. It is usually progressive and associated with a chronic inflammatory response in the airways due to noxious particles or gases (1). COPD is increasingly recognized as systemic and heterogeneous, and may involve skeletal muscle dysfunction (2,3), heart diseases (4), nutritional dysfunctions (5), biochemical changes (6), osteoporosis (7), and psychological disorders (8), as well as the impairment of pulmonary mechanics (9). Some changes in pulmonary mechanics include lung hyperinflation and air trapping, which are gradually identified in COPD patients. These factors may contribute to increasing the anteroposterior diameter of the thorax (10,11,12) and the horizontalization of the ribs (11,13), as well as to compromising postural alignment through compensations in the scapular and pelvic girdles and especially in the thoracic spine.

Additionally, COPD patients usually show rectification and shortening of the diaphragm, which may trigger changes in the endothoracic fascia. This shortening may lead to pelvic anteversion and hyperlordosis, due to the muscle-aponeurotic connections of the diaphragm with the iliopsoas, the transversus abdominis, and the quadratus lumborum muscles (14).

Few studies have evaluated the postural alignment of COPD patients (14,15).
Dias et al. (15) evaluated the kinematics of the scapular girdle and the cervical and thoracic spine of 19 COPD patients and 19 healthy individuals. The study showed a higher elevation of the scapula among COPD patients compared to healthy individuals. The authors report that this change is likely to occur due to pulmonary hyperinflation, which changes the position of the sternum and the scapula. However, they did not find significant differences in the regions of the cervical and thoracic spine between the studied groups.

Pachioni et al. (14) compared the posture of 15 COPD patients with that of 15 healthy individuals. The authors observed the presence of three important postural changes in COPD patients: posterior pelvic asymmetry, anterior pelvic asymmetry, and increased thoracic kyphosis. The study suggests that such changes could be related to the disease itself; however, the authors did not conduct a correlational analysis between pulmonary function variables and postural alignment.

With the progression of COPD, postural changes may become increasingly evident, making it more difficult for patients to perform their daily life activities and physical exercises. These changes will also interfere with the pulmonary function of these individuals, which is already compromised. Therefore, postural evaluation is of utmost importance in order to identify changes in postural alignment and their relation with pulmonary function in COPD patients. The results obtained in the current study may offer significant contributions to the understanding of the relationship between posture and respiration, since there are only a few studies in the literature about the presence of postural imbalances in COPD patients and about how much these may further compromise pulmonary function. Moreover, these results may provide further information to develop more appropriate therapy programs combining respiratory strategies, which are already part of pulmonary rehabilitation programs.

Thus, this study aimed: 1) to compare the postural alignment of COPD patients with that of apparently healthy individuals; and 2) to correlate pulmonary function variables with postural alignment in the COPD group.

Study Population and Sample

This is an analytical cross-sectional study with a quantitative approach. It was approved by the Ethics and Research Committee involving Human Subjects under the number CAAE: 08857612.2.0000.0118, and all participants were previously informed about the study and signed an Informed Consent Form, as determined by Resolution 466/12 of the National Health Council.

The sample consisted of 20 "apparently" healthy individuals and 20 patients with a diagnosis of COPD according to the Global Initiative for Chronic Obstructive Lung Disease (GOLD) classification (1). A diagnostic record, developed by researchers linked to the Laboratory of Respiratory Physiotherapy (LAFIR) at the State University of Santa Catarina (UDESC), was used to identify the characteristics of the individuals participating in the study.

For COPD patients, the following inclusion criteria were considered: 1) clinical stability in the last month and at the beginning of the evaluation protocol; 2) no use of supplemental oxygen; 3) absence of other associated respiratory or cardiovascular diseases; 4) no involvement in pulmonary rehabilitation programs in the 6 months prior to the start of this study; and 5) no recent spinal or lower limb surgeries and no fractures in the past 6 months. The exclusion criteria were as follows: 1) presence of exacerbations of the disease during the study; 2) clinical intercurrences related to cardiorespiratory abnormalities during the evaluations; 3) inability to perform any of the study evaluations (lack of understanding or cooperation); and 4) withdrawal during the evaluation period.

The inclusion criteria for the group of apparently healthy individuals were: 1) normal spirometry (FEV1/FVC ≥ 0.7, FEV1 ≥ 80% of predicted, FVC ≥ 80% of predicted); 2) age, body mass, and BMI compatible with the COPD group; and 3) absence of associated comorbidities. The exclusion criteria were: 1) inability to perform any of the study evaluations (lack of understanding or cooperation); and 2) withdrawal during the evaluation period.

Spirometry

Spirometry was performed to verify the subjects' pulmonary capacity, using a portable digital spirometer (EasyOne®, ndd Medical Technologies, Zurich, Switzerland), previously calibrated according to the methods and criteria recommended by the American Thoracic Society and the European Respiratory Society (17). The following parameters were measured: forced vital capacity (FVC), forced expiratory volume in one second (FEV1), and the FEV1/FVC ratio, before and 15 minutes after inhalation of the bronchodilator (BD) salbutamol (400 μg) in COPD patients. At least three acceptable maneuvers and two reproducible maneuvers were performed. Spirometry variables are expressed in absolute values and as percentages of the predicted values of normality, as determined by Pereira et al. (18). The criteria for a normal lung function test consist of FVC and FEV1 ≥ 80% of predicted and FEV1/FVC ≥ 0.7.

Postural Alignment

For the evaluation of postural alignment, the Postural Assessment Software (PAS/SAPO), validated by Ferreira et al. (19), was used. Before the photographs were taken, male subjects were instructed to wear bermuda shorts, and female subjects to wear bermuda shorts and a top. Initially, anatomical points were identified through palpatory anatomy in the following regions of the body: head, shoulders, pelvis, spine, upper limbs, and lower limbs. After the points had been identified, they were marked with Styrofoam balls with a diameter of 20 mm, fixed to the body parts with double-sided adhesive tape. To mark the points and define the postural changes to be evaluated, the protocol developed by Pachioni et al. (14) was used.

In the anterior view, four postural changes were evaluated: lateral head tilt (LHT), by measuring the angle formed between the glabella/mentum and the horizontal axis; shoulder asymmetry (SA1), by the angle formed between the two acromions and the horizontal axis; anterior pelvic asymmetry (APA), measured by the angle between the two anterior superior iliac spines and the horizontal axis; and, finally, lateral trunk tilt (LTT), measured by the angle between the two anterior superior iliac spines and the two acromions.

In the posterior view, two postural changes were evaluated: scapular asymmetry (SA2), evaluated by the angle formed between the inferior angles of the scapulae and the horizontal axis; and posterior pelvic asymmetry (PPA), measured by the angle between the two posterior superior iliac spines and the horizontal axis.

In the right lateral view, two postural changes were evaluated: head protrusion (HP), by the angle between the spinous process of C7/tragus and the horizontal axis; and shoulder protrusion (SP), by measuring the angle between the spinous process of C7/acromion and the horizontal axis.

In the left lateral view, two postural changes were evaluated: anterior pelvic tilt (APT), obtained by the angle between the anterior-superior/posterior-superior iliac spines and the horizontal axis; and thoracic kyphosis (TK), measured by the angle between the vertebrae T3 and T12, with the vertex at the most prominent vertebra.

Next, the individual was placed in a static orthostatic position at a distance of 50 cm in front of a black wall and next to a plumb line marked with three Styrofoam balls placed 50 cm apart from each other, to allow calibration of the photograph. The feet were positioned loosely on top of a black EVA foam mat to shoot the first photograph. The individual was instructed, through a verbal command, to remain in a comfortable position, with the eyes fixed on a line, in a relaxed posture. Subsequently, the foot outlines were drawn with white chalk so that the individual could stand at the same point for the next photo shoot. After the frontal and left lateral views were photographed, the mat was rotated 180° to obtain the photographs in the posterior and right lateral views (Figure 1).

Two photo cameras (Sanyo BD 200, 14.1 megapixels, DSC-W610) were used, each placed on a tripod (97 cm high). One camera was placed in front of the individual and the other at the left side; both cameras were positioned at a distance of 2.30 m from the participant. The distance and height of the cameras were adapted, as previously mentioned, to facilitate visualization of the individual. According to Mota et al. (20), the distance should be sufficient to position the entire body in the center of the image, and the resolution should be sufficient to show each of the markers clearly.

The photos were digitalized and analyzed with the PAS software. The analysis of the photos and the measurements of the angles of the postural alignment variables were based on the coordinates of the anatomical points.

Statistical Analysis

The data were analyzed with SPSS (Statistical Package for the Social Sciences) for Windows, version 20.0 (IBM Corporation, Armonk, NY, USA). Descriptive analysis (mean and standard deviation) was applied to all variables. For the sample size, the statistical power for 20 subjects per group was calculated; a post hoc sample calculation was performed using Student's t test with the GPower 3.1 program. The mean values of thoracic kyphosis were determined in both groups: COPD, mean 207.13 (± 4.83); healthy, mean 202.81 (± 5.41). The calculated effect size was 1.104315 and, using a margin of error of 5%, the power of the calculated sample was 80 for 20 subjects in each group.

To verify the normality of the data, the Shapiro-Wilk test was applied. To compare shoulder asymmetry, head protrusion, shoulder protrusion, and thoracic kyphosis between the groups, Student's t test was used; to compare lateral head tilt, anterior pelvic asymmetry, lateral trunk tilt, scapular asymmetry, posterior pelvic asymmetry, and anterior pelvic tilt, the Mann-Whitney U test was used. The Pearson correlation coefficient was used to correlate the pulmonary variables with lateral head tilt, shoulder asymmetry, scapular asymmetry, head protrusion, shoulder protrusion, anterior pelvic tilt, and thoracic kyphosis. Spearman's correlation coefficient was used to correlate pulmonary function with anterior pelvic asymmetry, lateral trunk tilt, and posterior pelvic asymmetry. The significance level for all tests was set at an alpha of 5%.

Results

Participants in the study were 40 subjects of both genders (20 men and 20 women), divided into two groups: group 1, with 20 COPD patients aged 65.35 (± 6.76) years, and group 2, with 20 apparently healthy individuals aged 65.55 (± 4.57) years.

Considering age and the anthropometric and pulmonary characteristics of the studied groups (i.e., COPD patients and apparently healthy individuals), Table 1 shows no statistically significant difference in age, body mass, height, or BMI, demonstrating that the groups are homogeneous for these variables. Regarding pulmonary capacity, a significant difference (p < 0.001) was observed between the groups in the percent of predicted FEV1 and the percent of predicted FVC, with the better results, on average, in the apparently healthy individuals.

According to the GOLD staging system, the patients had severe COPD, classified as Stage III (FEV1 = 46.25 ± 14.52% of predicted).

Regarding the postural alignment of the COPD patients and the apparently healthy individuals, Table 2 shows a statistically significant difference between the groups for posterior pelvic asymmetry (p = 0.021), anterior pelvic tilt (p = 0.014), and the thoracic kyphosis angle (p = 0.011). Therefore, it can be observed that COPD patients have greater posterior pelvic asymmetry, greater anterior pelvic tilt of the left and right pelvis, and a greater thoracic kyphosis angle.

When comparing men and women in the COPD group, the results showed no significant difference in any of the measurements of the postural evaluation. Both genders showed similar degrees for the postural alignment variables (Table 3).

This study showed a relationship between pulmonary variables and measures of postural alignment in the COPD group. A positive correlation was found between FEV1 (% pred) and head protrusion (r = 0.488, p = 0.029), and between FVC (% pred) and head protrusion (r = 0.568, p = 0.009). There was a negative correlation between FEV1 (% pred) and anterior pelvic tilt (r = -0.472, p = 0.036), and between FVC (% pred) and anterior pelvic tilt (r = -0.461, p = 0.041). There was no correlation between the other pulmonary function variables and the measures of postural alignment (Table 4).
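The statistical workflow described in the Statistical Analysis section above can be illustrated with a short, hypothetical Python sketch. The authors used SPSS; the scipy calls and the toy data below, generated around the means and standard deviations reported in this paper, are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical angle measurements (degrees), for illustration only
copd = rng.normal(207.1, 4.8, 20)     # e.g., thoracic kyphosis, COPD group
healthy = rng.normal(202.8, 5.4, 20)  # e.g., thoracic kyphosis, healthy group

# Normality check (Shapiro-Wilk), as in the paper
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (copd, healthy))

# Parametric vs non-parametric group comparison, alpha = 5%
if normal:
    stat, p = stats.ttest_ind(copd, healthy)     # Student's t test
else:
    stat, p = stats.mannwhitneyu(copd, healthy)  # Mann-Whitney U test
print(f"group comparison: p = {p:.3f}")

# Correlation of lung function with posture (Pearson or Spearman)
fev1 = rng.normal(46.2, 14.5, 20)  # hypothetical FEV1 (% pred) values
r, p = stats.pearsonr(fev1, copd)
print(f"FEV1 vs angle: r = {r:.2f}, p = {p:.3f}")
```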
Discussion

The current study identified three major postural changes in COPD patients compared to apparently healthy individuals: a greater thoracic kyphosis angle, greater anterior pelvic tilt, and greater posterior pelvic asymmetry. Our results are in agreement with the study conducted by Pachioni et al. (14), who also compared the postural alignment of 15 COPD patients and 15 healthy individuals and found statistically significant differences in these same variables.

The greater thoracic kyphosis angle in the COPD group may be caused by the increased anterior-posterior diameter of the thorax (10,11,12) and the horizontalization of the ribs (11,13), possibly due to the pulmonary hyperinflation and air trapping that are characteristic of COPD (21).

In COPD patients, the disease itself and other factors, such as aging, may be related to the increase in thoracic kyphosis. Usually, thoracic kyphosis undergoes changes with aging, as it is often related to aspects such as osteoporosis, obesity, a sedentary lifestyle, and muscular weakness (22,23,24,25), which are commonly observed in elderly individuals. However, in this study there was no statistically significant difference in mean age between the evaluated groups, which reinforces the hypothesis that this postural change may be related to the progression of, and the systemic changes caused by, COPD and not only to the aging process.

Another change found in COPD patients was a greater angle of anterior pelvic tilt and greater posterior pelvic asymmetry, on the right and left sides, when compared with apparently healthy individuals, which was also reported by Pachioni et al. (14). Thus, this change shows a possible relationship between this postural imbalance and the progression of the disease. Considering that the participants of this study are elderly subjects and that body posture may change over the years (22), an increase in anterior pelvic tilt could be attributed to the aging process. However, the mean age of both groups is very similar, and this postural change was identified more prominently in COPD patients. Consequently, the cause of this change should not be attributed solely to aging, but also to physiopathological factors related to the disease, as these patients often show respiratory distress that potentiates the recruitment of accessory and thoracic cavity muscles (26), triggering an apical respiratory pattern. This respiratory pattern elevates the action potentials of muscles such as the sternocleidomastoid, resulting in shortening, loss of flexibility, changes in the position of the head, and compensations in the scapular girdle, pelvic girdle, and thoracic spine (27,28).
In order to verify whether these changes in postural alignment were related to pulmonary function, a correlation between these variables was performed. The results showed a positive correlation of FEV1 (% pred) and FVC (% pred) with the head protrusion angle, and a moderate negative correlation with anterior pelvic tilt. However, no correlation was found for the other pulmonary variables.

According to these results, the postural imbalances found in the head position of COPD patients may be related to the impairment of respiratory function, as well as to the compensatory postures adopted by the patient during respiratory attacks. According to Brech et al. (29), obstruction or narrowing of the pharyngeal airspace has been associated with anteriorization and extension of the head position, which rectifies the path of airflow and facilitates the entry of air into the lower airways.

Due to these factors, COPD patients adopt postures that may facilitate the action of the respiratory muscles. This is extremely worrying: according to Okuro et al. (30), anteriorization of the head increases the activity of the sternocleidomastoid muscle and causes elevation of the thoracic cavity. Consequently, thoracoabdominal mobility is decreased and the ventilatory efficiency promoted by the diaphragm is compromised. This mechanical disadvantage intensifies the inspiratory effort and generates a vicious circle of muscular tension, postural alteration, and increased respiratory work. Thus, it is observed that the changes resulting from the disease process can trigger the postural imbalances found in these patients. However, it is difficult to quantify these imbalances because the literature does not offer reference values of postural alignment for elderly people. For this reason, this study was careful to compare the postural alignment of COPD patients and apparently healthy individuals within the same age group.

To further analyze the postural imbalances in COPD patients, a comparative analysis was carried out between men and women. The results showed similarities in the degrees of the postural alignment angles between the genders. These results may be due to the age similarity of the studied sample, as both groups showed typical natural aging, which causes gradual changes and reductions in the capabilities of the various systems of the human body.

There is a lack of studies comparing postural alignment between elderly women and men. However, some studies have shown that postural changes in the region of the thoracic curvature are generally more pronounced in women, due to hormonal factors and weakness of the spinal extensor muscles (31,32).

A limitation of this study is the evaluation of thoracic kyphosis using the PAS method, because visualization of the markers may be difficult in the lateral view, which compromises accurate measurement of the individual's thoracic kyphosis. The literature has already reported that the PAS software is not the best tool to evaluate thoracic kyphosis, proposing other instruments such as the Cobb index (gold standard) and measurement with a flexicurve ruler (33). However, the main objective of this study was to assess not only thoracic kyphosis, but postural alignment as a whole.
To date, the literature has not presented reference values for postural alignment in this population. Therefore, it was not possible to compare the values found in this study with values considered "normal". For this reason, we compared the values found in the COPD group with those of healthy individuals. However, it is believed that the results of the present study may contribute positively as a basis for further studies aiming to establish these reference values and to investigate the major changes that may occur in postural alignment with advancing age.

Conclusion

COPD patients assessed with the Postural Assessment Software (PAS) showed changes in postural alignment, namely a greater angle of thoracic kyphosis, of anterior pelvic tilt and of posterior pelvic asymmetry, when compared with apparently healthy individuals.

In the COPD group, there was a relationship between pulmonary function and some variables of postural alignment. However, it cannot be stated that the physiopathological factors of the disease are able to influence postural alignment, or that posture can further compromise pulmonary function. Thus, the lack of published studies on postural alignment associated with the compromised respiratory function of COPD patients reinforces the need for further studies.

Figure 1 - Evaluation of the postural alignment using PAS. Note: produced by the author.

Table 1 - Characteristics of the studied groups (n = 40).

Table 3 - Comparison of postural alignment between men and women in the COPD group (n = 20).

Table 4 - Correlation between pulmonary function and measures of postural alignment of the COPD group (n = 20). Note: values are expressed as correlation (r); * p < 0.05; ** p < 0.001. FEV1 (% pred): percent of predicted forced expiratory volume in one second; FVC (% pred): percent of predicted forced vital capacity; LHT: lateral head tilt; SA1: shoulder asymmetry; APA: anterior pelvic asymmetry; LTT: lateral trunk tilt; SA2: scapular asymmetry; PPA: posterior pelvic asymmetry; HP: head protrusion; SP: shoulder protrusion; APT: anterior pelvic tilt; TK: thoracic kyphosis; (°): measurements in degrees.
Human gene polymorphisms and their possible impact on the clinical outcome of SARS-CoV-2 infection

The SARS-CoV-2 pandemic has become one of the most serious health concerns globally. Although multiple vaccines have recently been approved for the prevention of coronavirus disease 2019 (COVID-19), an effective treatment is still lacking. Our knowledge of the pathogenicity of this virus is still incomplete. Studies have revealed that viral factors such as the viral load, duration of exposure to the virus, and viral mutations are important variables in COVID-19 outcome. Furthermore, host factors, including age, health condition, co-morbidities, and genetic background, might also be involved in clinical manifestations and infection outcome. This review focuses on the importance of variations in the host genetic background and pathogenesis of SARS-CoV-2. We will discuss the significance of polymorphisms in the ACE-2, TMPRSS2, vitamin D receptor, vitamin D binding protein, CD147, glucose-regulated protein 78 kDa, dipeptidyl peptidase-4 (DPP4), neuropilin-1, heme oxygenase, apolipoprotein L1, vitamin K epoxide reductase complex 1 (VKORC1), and immune system genes for the clinical outcome of COVID-19.

Background

In March 2020, the World Health Organization (WHO) declared a pandemic caused by an emerging coronavirus named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The pathogen rapidly surged across the globe, causing havoc on all continents and severely destabilizing healthcare systems. An infection with this virus can cause a variety of mild to severe symptoms, which are collectively named coronavirus disease 2019 (COVID-19) [1]. SARS-CoV-2 is a member of the genus Betacoronavirus, family Coronaviridae. This enveloped virus has a positive single-stranded RNA genome with a length of 29.8 to 29.9 kb, making it the largest among all human RNA viruses [2-4]. SARS-CoV-2 is primarily transmitted through respiratory droplets and contact routes [2,5]. With unprecedented speed, multiple vaccines have recently been approved, and many others are currently under development to prevent COVID-19. However, with regard to antiviral drugs, specific treatments for SARS-CoV-2 are not yet available. The course of the disease can differ greatly among individuals, ranging from an asymptomatic infection to mild or severe disease and death. The mortality rate for SARS-CoV-2 is still under debate and is affected by multiple variables [2,5-7]. From the start of the COVID-19 pandemic, there have been numerous publications about the pathogenesis of the virus, but much still remains to be elucidated. For instance, it is still not clear why some people remain asymptomatic while others develop severe disease. Furthermore, viral respiratory infections are usually more serious in children than in adults, but for SARS-CoV-2 infection this seems to be reversed. Answering these questions and finding the factors that affect the virulence of SARS-CoV-2 will contribute to the development of suitable treatment strategies and better infection control.

Age is an important factor in the outcome of viral respiratory infections. During the Spanish flu pandemic in 1918, the mean death rate was high in people aged younger than five years, 20-40 years old, and older than 65 years [8]. Similarly, at the start of the H1N1 influenza pandemic in 2009, severe pneumonia was observed in 5- to 59-year-old individuals [9].
With other coronaviruses, such as SARS-CoV and human coronavirus NL63 (HCoV-NL63), it has been observed that children are relatively resistant to infection, similar to the current observations for SARS-CoV-2 [10,11]. It has been demonstrated that viral factors such as the number of viral particles in the inoculum, the duration of exposure to the virus, and mutations in the virus genome can influence the severity and outcome of the disease [12]. Similarly, host factors such as health condition, age, gender, smoking, immune status, diabetes, hypertension, cardiovascular disease, chronic respiratory disease, cancer, and, more importantly, the genetic background, might determine the clinical manifestations and outcome of infection [13-17]. Based on these data, it appears that the genetic background of the host influences the severity of the infection and the disease outcome [18,19]. In this review, we provide a comprehensive overview of the current data regarding host gene polymorphisms that might be associated with the pathogenesis and outcome of SARS-CoV-2 infection.

Angiotensin-converting enzyme 1 and angiotensin-converting enzyme 2 polymorphisms

The human ACE (or ACE-1) gene is located on chromosome 17q23.3 and is made up of 26 exons. It contains several polymorphisms, including an insertion (I)/deletion (D) polymorphism of a repetitive 287-base-pair Alu sequence in intron 16 [20]. It has been reported that polymorphisms in this gene are associated with SARS-CoV infection outcome [21]. The serum level of ACE in people with the DD genotype is almost twice that of people with genotype II [22]. The D allele might affect renin-angiotensin pathway activity by increasing the serum or local level of ACE, leading to damage of the vascular endothelium and lung epithelium [23]. Studies have shown that some ACE polymorphisms are associated with Alzheimer's disease (the II and ID ACE genotypes), cardiovascular disease (DD), hepatitis C virus (HCV) infection (DD), and cancers (ACE A240T polymorphism) [24-29] (Table 1). Itoyama et al. found that the frequency of the D allele in SARS patients (with and without hypoxemia) was significantly higher than in healthy controls [21]. They also suggested that the ACE-1 insertion/deletion (I/D) polymorphism could probably accelerate the development of pneumonia in SARS-CoV infection [21]. Another study in Caucasian individuals demonstrated that the presence of the D allele was accompanied by an increased rate of acute respiratory distress syndrome (ARDS) [30]. Yamamoto et al. reported that the frequency of SARS-CoV-2 infections and mortality strongly correlated with the ACE-1 I/D genotype, which might serve as a predictive marker for the severity of COVID-19 [31]. On the other hand, Chan et al. reported that in the Chinese population, the ACE I/D polymorphism was not associated with SARS-CoV infection [32] (Table 2). The reason for this discrepancy in the Chinese population is unknown, but it might be related to the lower frequency of the DD genotype and the D allele in the Chinese population than in people with a Caucasian background [30,33].

Angiotensin-converting enzyme 2 (ACE-2) is a close homolog of ACE (or ACE-1), exhibiting 61% amino acid similarity and 40% sequence identity. ACE-2 acts as a carboxypeptidase [34] and has been recognized as the main entry receptor for both SARS-CoV and SARS-CoV-2 through interaction with the spike (S) protein [35]. The ACE-2 gene is located on chromosome Xp22.2 and contains 18 exons [36].
This enzyme is anchored in the cell membrane as a type 1 integral membrane glycoprotein with a high expression level in lung, heart, artery, intestinal, and kidney tissues [37]. The primary role of ACE-2 is converting angiotensin II to angiotensin 1-7, but it also converts angiotensin I to angiotensin 1-9 [34]. Aberrant expression of ACE-2 has been associated with atherosclerosis, hypertension, heart failure, chronic kidney disease, and increased vascular permeability, facilitating respiratory system edema [38-43].

Entry of SARS-CoV-2 into target cells is facilitated by the S protein, which consists of two subunits, S1 and S2. The S1 subunit contains the receptor-binding domain (RBD), which interacts with the peptidase domain (PD) of the ACE-2 protein, and this is a critical step for entry of the virus into the host cell [44]. Therefore, genetic variations in the ACE-2 sequence might alter the molecular interaction of the RBD and PD domains, which not only changes the host's susceptibility to the virus but also influences the severity of the disease and the outcome of infection. A recent investigation indicated that genetic variations and expression patterns of ACE-2 might modulate susceptibility to SARS-CoV-2 infection. Given that the ACE-2 gene is located on the X chromosome, this could be related to the observed gender discrepancies in disease outcome. Srivastava et al. reported a significant positive correlation between the ACE2 rs2285666 polymorphism and a lower frequency of infection and case-fatality rate in SARS-CoV-2 infection in the Indian population [45]. Moreover, it has been reported that three SNPs, K26R (rs4646116), M82I (rs267606406) and E329G (rs143936283), are associated with a higher binding affinity for the RBD domain of the S protein compared to wild-type ACE-2. This might result in increased susceptibility to SARS-CoV-2 infection [46]. In contrast, the I21T (rs1244687367), E37K (rs146676783) and D355N (rs961360700) SNPs cause a significantly lower binding affinity and could decrease the susceptibility to infection [46]. Moreover, the hotspot N720D variant in the C-terminal collectrin-like domain of ACE-2 affects the ACE-2-TMPRSS2 complex and creates a favorable site for TMPRSS2 binding and cleavage, thus facilitating binding to the S protein and potentially promoting virus entry into the cell [47]. Molecular docking simulations have revealed that six ACE2 missense variants (I21T, A25T, K26R, E75G, T55A and E37K) increase the binding affinity, while 11 variants (I21V, E35K, K26E, T27A, S43R, Y50F, N51D, N58H, K68E, E23K and M82I) decrease the affinity of ACE-2 for the RBD of the S protein [48]. Hashizume et al. reported that ACE-2 SNPs have a limited impact on the ACE-2-dependent cell entry of SARS-CoV-2 [49]. Furthermore, a large cohort study in an Italian population demonstrated no significant relationship between ACE-2 polymorphisms and the severity of COVID-19 [15] (Table 2).

Replication of both SARS-CoV and human coronavirus NL63 causes a reduction in the expression of the ACE-2 protein, which has been demonstrated to induce more-severe complications in SARS-CoV infection [50]. This reduction is caused by the inhibitory effect of the S protein on ACE-2 expression in infected cells and might result in severe acute lung failure [50]. Hence, considering the protective function of ACE-2 in the tissues, SARS-CoV-2 infection might induce an imbalance in Ang II/Ang 1-7 and consequently result in inflammation and hypoxia [51].
Another report has indicated that an increase in the concentration and/or expression of ACE-2 receptors in pediatric lung pneumocytes probably provides protection against severe clinical manifestations of SARS-CoV-2 infection [52]. It has been shown that the injection of the SARS-CoV S protein into mice leads to pulmonary failure. This effect was diminished by blocking the renin-angiotensin pathway [53]. Angiotensin II type I receptor (AT1R) is an important receptor that mediates vascular permeability, and its activation promotes severe acute pulmonary damage [54]. A study performed in mice demonstrated that the inhibition of AT1R reduces severe lung damage as well as pulmonary edema [53].

In addition to its involvement in coronavirus infection, ACE-2 has also been associated with other viral respiratory infections. For instance, Gu et al. reported that ACE-2 plays a considerable role in pulmonary damage related to respiratory syncytial virus (RSV) infection, which might be related to the high level of angiotensin II in the plasma [55]. Increased levels of angiotensin II could be the result of decreased ACE-2 expression and could lead to severe lung damage mediated by AT1R during RSV infection [55]. Moreover, an association has been observed between influenza virus infection and ACE-2 expression levels, which differed between influenza virus strains [56,57]. Although some studies have focused on pulmonary pathology to evaluate the impact of these viruses, the molecular mechanisms predisposing to pulmonary damage are not completely explained by the effects of infection on the renin-angiotensin system (RAS) and ACE-2.

Transmembrane protease serine 2 polymorphism

The transmembrane protease serine 2 (TMPRSS2) gene, with a length of 43.59 kbp, is located on chromosome 21q22.3 and consists of 14 exons [58]. It encodes a transcript that is processed by alternative splicing to yield two mRNA variants of 3.25 and 3.21 kb. The TMPRSS2 protein (492 amino acids) is a type II transmembrane serine protease that is expressed on the surface of different tissues/cells, including the small intestine, prostate, colon, salivary gland, and stomach [58]. TMPRSS2 upregulation has been demonstrated in some cancerous cells, and it promotes metastasis by proteolytic activation of hepatocyte growth factor (HGF), a pathway that is targeted by specific drugs [59]. Some polymorphisms in TMPRSS2 have been reported to be genetic risk factors for specific types of cancers and viral infections [15,60]. For instance, the TMPRSS2 M160V polymorphism increases the susceptibility to prostate cancer in Japanese men [60]. Recently, it was reported that polymorphisms in the TMPRSS2 gene might be involved in susceptibility to SARS-CoV-2 infection and COVID-19 outcome [15]. Proteolytic cleavage of the S protein by TMPRSS2 at the S1/S2 boundary triggers S2-mediated fusion of the viral envelope and endosome membrane, which is a crucial step for the release of the ribonucleoprotein into the cytoplasm [35]. Expression of the TMPRSS2 gene is affected by androgen and estrogen hormones, which might partially explain the observed gender differences in disease severity [53,57,61]. A study in Italy demonstrated that some SNPs, including rs2070788, rs9974589, and rs7364083, were associated with a higher expression level of TMPRSS2 and played a considerable role in determining the severity of COVID-19 [15].
Moreover, four other polymorphisms of TMPRSS2 (rs77675406, rs713400, rs112657409, and rs11910678) affect the expression of the TMPRSS2 gene [62]. Another study showed that four variants of TMPRSS2 (rs2070788, rs383510, rs464397 and rs469390), which affect the expression of TMPRSS2 in lung tissue, had a higher frequency in European and American populations than in Asian populations. These observations might explain the relatively higher susceptibility of European and American populations to SARS-CoV-2 infection [63] (Table 1). Furthermore, a recent report by Fuentes et al. demonstrated that the synonymous variants rs61735794 and rs61735792 were significantly associated with SARS-CoV-2 infection outcome [64]. Variations in the TMPRSS2 gene have also been associated with the outcome of other viral infections. For instance, Cheng et al. reported that the rs2070788 and rs383510 polymorphisms were associated with higher susceptibility to H1N1 and H7N9 infection. Based on these findings, the authors suggest that people carrying these polymorphisms might have a higher risk of progression to severe disease [65].

Immune system gene polymorphisms

The relationship between polymorphisms of immune genes and the outcome of viral infections has always been a matter of concern. Considering the pivotal role of these genes in viral clearance and immunopathogenesis, polymorphisms in these regions are likely to affect the outcome of an incompletely characterized disease like COVID-19 (Tables 1, 2).

Cytokines

Cytokines are small proteins (~5-20 kDa) that are important for cell signaling and include interleukins, chemokines, lymphokines, interferons, and tumor necrosis factors. In severe cases of SARS-CoV-2 infection, high concentrations of innate inflammatory cytokines, including type I interferons (IFNs), tumor necrosis factor α (TNF-α), IL-6, IL-1β, and some chemokines, including CCL-2, CCL-3, CCL-5, and IP-10, are secreted by epithelial and immune cells [66]. This uncontrolled and excessive release of pro-inflammatory cytokines, i.e., a cytokine storm, has been observed in patients infected with influenza virus, SARS-CoV, and MERS-CoV [66,67]. A cytokine storm is characterized by strong proliferation and hyperactivation of T lymphocytes, overexpression of more than 100 pro-inflammatory genes, and massive endothelial and epithelial cell apoptosis in the lung, which results in alveolar edema, hypoxia and ARDS, and, finally, death [66,68]. The significant role of this aberrant immune response in severe COVID-19 has inspired the search for antibodies that block pro-inflammatory cytokines such as IL-6 and IL-17 as well as monocyte recruitment elements [68]. Additional immune gene variations have been associated with susceptibility to SARS-related pathogenesis. For instance, the IFN-γ +874A allele variant is considered a risk factor for susceptibility to SARS [69]. Polymorphisms of interleukin-6 (IL-6), a 21-kDa glycoprotein, have been associated with viral infections such as influenza virus, RSV, hepatitis B virus (HBV), and HCV [70]. For example, the IL6 -174 C/C genotype (rs1800795) is associated with severe RSV infection [71]. Although IL-6 plays a significant role in the initiation of the cytokine storm in SARS-CoV-2-infected patients, little information is available concerning IL-6 polymorphisms and the pathogenesis of idiopathic pulmonary fibrosis in SARS-CoV-2 infection. A meta-analysis showed that the IL6 -174C allele is associated with elevated cytokine production and the outcome of pneumonia [72].
Based on the regulatory function of IL-6 in CD4 T cell fate, it can be hypothesized that studying polymorphisms of the IL-6 gene may provide further insights into COVID-19 pathogenesis [73]. In fact, IL6 polymorphism is considered a valid indicator of the severity or pathology associated with SARS-CoV-2 infection. Moreover, Panda et al. reported that the CCR5 Δ32 polymorphism might influence susceptibility to SARS-CoV-2 infection [74]. Also, an IL4 gene polymorphism (SNP rs2070874; risk allele, T) might influence the outcome of SARS-CoV-1 infection [75].

The IFNAR2 gene is located on chromosome 21q22.11 and encodes the IFN-α/β receptor beta chain [76]. Data analysis has shown that polymorphism at the rs2236757 locus of the IFNAR2 gene is associated with the outcome of COVID-19 disease [77]. IFNL4 is another gene that might be involved in SARS-CoV-2 infection. IFNL4 is located on chromosome 19q13.2 and is involved in the defense against viral infections such as HCV, RSV and influenza virus [78]. Amodio et al. reported that polymorphisms at the rs368234815 TT/ΔG locus of the IFNL4 gene are associated with a higher SARS-CoV-2 viral load [79].

The 2'-5'-oligoadenylate synthetase (OAS) genes encode interferon-induced enzymes that activate latent ribonuclease (RNase L). The OAS-RNase L axis is rapidly activated following cellular infection with various RNA viruses, e.g., flaviviruses, alphaviruses, picornaviruses, and coronaviruses [85,87]. Moreover, the rs10735079 variant in a gene cluster encoding antiviral restriction enzyme activators (OAS1, OAS2, OAS3) is associated with SARS-CoV-2 infection outcome [77]. Human Mx1 is another type I IFN-induced gene; it is located on chromosome 21q22.3, and the encoded protein targets viral nucleoproteins [88,89]. Considering the role of these genes in innate immunity against viral infections, some studies have suggested that SNPs in the OAS1 gene and the MxA promoter might be associated with a higher susceptibility to more-severe SARS infection [90]. In this regard, He et al. reported that SNPs in the 3'-UTR region of the OAS1 gene and the MxA promoter were associated with susceptibility to SARS in the Han population of China [90]. The results of their study revealed that the frequency of the AG and GG genotypes of the OAS1 gene was higher in the control group than in SARS patients. Therefore, they suggested that the G allele might have a protective effect against SARS-CoV infection [90]. Also, studies have revealed that polymorphisms at the -123 (C/A) and -88 (G/T) loci of the Mx1 gene affect the levels of protein expression, thereby influencing disease outcomes in patients infected with HBV, HCV, enterovirus 71, and SARS-CoV [90-98].

CD14

CD14 is a transmembrane glycoprotein encoded by a gene located on chromosome 5q31.3 and is expressed on cells of the monocyte-macrophage lineage and on neutrophils [99]. CD14 is involved in a variety of biological activities, including cell differentiation and host-pathogen interactions, and it is a key molecule in the activation of innate immune cells [99]. It has been reported that the frequency of the CD14 -159CC polymorphism (rs2569190) is significantly higher among patients suffering from severe SARS, indicating its importance in determining the disease outcome [99]. On the other hand, it has also been reported that IL-10 and TNF-α polymorphisms are not associated with SARS sensitivity [69]. In a study carried out by Tu et al., it was found that the CCL2 G-2518A (rs1024611) polymorphism and variations in codon 54 of MBL (rs1800450) are significantly associated with the risk of SARS-CoV infection [100].
CD209

The CD209 gene is located on chromosome 19p13.2 and encodes a critical dendritic-cell surface receptor named dendritic cell-specific intercellular adhesion molecule-3-grabbing non-integrin (DC-SIGN) [101,102]. In innate immunity and antiviral defense, DC-SIGN has key functions, including DC migration, antigen uptake, and T-cell priming [103]. DC-SIGN was first discovered as an attachment factor for HIV that binds to the viral envelope glycoprotein and increases the efficiency of infection [104]. Subsequently, DC-SIGN and DC-SIGNR (or L-SIGN) were shown to enhance Ebola virus and SARS-CoV infection [105]. Studies have suggested that polymorphisms in the DC-SIGN gene might be involved in susceptibility/resistance to dengue virus, tick-borne encephalitis virus, cytomegalovirus, and SARS-CoV-2 infection [106,107,110]. Supporting this hypothesis, Iyer et al. recently reported that polymorphisms at rs2287886 and rs8105483 are correlated with protection, and those at rs11881682, rs8105572, rs7252229, rs11465384, rs7248637, and rs1146541 are correlated with risk of/susceptibility to symptomatic COVID-19 disease [106] (Table 2). In addition, they revealed that the G allele of the rs10518270 and rs2335525 SNPs is a risk allele in SARS-CoV-1 infection [106].

Mucin 5B

The mucin 5B (MUC5B) gene is located on chromosome 11p15.5 and encodes MUC5B, which contributes to the viscoelastic and lubricating properties of the mucosa [108]. The MUC5B gene is upregulated in some human diseases related to pulmonary fibrosis and end-stage lung disease [109]. Many symptoms related to MUC5B upregulation in lung disease resemble those of COVID-19 [106]. It has been suggested that SNPs in the MUC5B gene might be associated with human diseases such as pulmonary fibrosis and COVID-19 [106,110-112]. Iyer et al. reported that polymorphisms at rs2672794, rs56235854, rs7115457, rs2735727, rs12417955, and rs56367042 might be associated with susceptibility to COVID-19 disease and that those at rs2735733, rs2249073, and rs2857476 might be associated with comorbidities [106].

IFN-induced transmembrane protein 3

Another IFN-inducible gene is the IFN-induced transmembrane protein 3 (IFITM3) gene, which is located on chromosome 11p15.5 [113]. This gene encodes a membrane protein that inhibits viral fusion with cholesterol-depleted endosomes [113]. It has been shown that IFITM3 is active against viral infections, including influenza virus, SARS-CoV, dengue virus, Ebola virus, and HIV-1 [114,115]. Functional polymorphisms in this gene have been studied in several infections. Iyer et al. reported that polymorphisms at rs34481144 might have a protective role, whereas those at rs7948108, rs12252, rs4804800, rs4804803, rs6598045, and rs3888188 might increase susceptibility to SARS-CoV-2 infection [106]. Furthermore, Gómez et al. revealed that the rs12252 C variant of the IFITM3 gene might be associated with COVID-19 disease outcome [116].

IFN induced with helicase C domain 1

The IFN induced with helicase C domain 1 (IFIH1) gene is located on chromosome 2q24.2, is induced by type I IFN, and encodes MDA5, an intracellular sensor of viral RNA [117]. It has been reported that the lower frequency of the T allele at the rs1990760 locus of IFIH1 in the African-American population might be associated with COVID-19 infection outcome due to decreased expression of IFN-β [118].

Formyl peptide receptor 1 (FPR1), a pattern-recognition receptor, is involved in the induction of innate immune responses against bacterial infections [119].
Although it has been reported recently that FPR1 expression is involved in lung inflammation and fibrosis, a genetic investigation showed no association between polymorphisms at the rs867228 and rs5030880 loci of the FPR1 gene and the severity of COVID-19 disease [120] (Table 2).

Dipeptidyl peptidase 9

Dipeptidyl peptidase 9 (DPP9) is a member of the S9B family, and its gene is located on chromosome 19p13.3 [77]. DPP9 is a serine protease that plays an important role in antigen presentation, activation of inflammasomes, and cleavage of important elements of the immune system such as chemokines (CXCL10, CXCL11, and CXCL12) [121-123]. It has been reported that polymorphisms at the rs2109069 locus of the DPP9 gene might be associated with COVID-19 outcome. For instance, variations at this location were associated with idiopathic pulmonary fibrosis in COVID-19 patients [77,124].

Tyrosine kinase 2

Tyrosine kinase 2 (Tyk2), a member of the Janus kinase (JAK) family, is associated with the cytoplasmic domain of type I and II cytokine receptors [77]. The Tyk2 gene is located on chromosome 19p13.2. It has been reported that polymorphisms at rs2109069 near the Tyk2 gene might be associated with COVID-19 outcome [77].

Human leukocyte antigen

Human leukocyte antigens (HLAs) are encoded by the major histocompatibility complex (MHC) genes in humans and are responsible for regulation of the immune system. The HLA gene complex is located on chromosome 6p21 [125]. Previous investigations have shown that various HLA polymorphisms, such as HLA-DRB1*1202, HLA-B*4601, HLA-B*0703, and HLA-Cw*0801, might predispose carriers to more-severe SARS-CoV infection [126,127]. In contrast, the HLA-A*0201, HLA-Cw1502, and HLA-DR0301 alleles are correlated with protection from SARS-CoV infection [128], while HLA-DQB1*02:0 and HLA-DRB1*11:01 are associated with a higher risk of MERS-CoV infection [129]. Amoroso et al. reported that HLA-DRB1*08 was more frequent in COVID-19 patients and correlated with an increased mortality rate [130].

Other immune system genes

Alpha-2-HS-glycoprotein (AHSG, alpha-2-Heremans-Schmid glycoprotein or fetuin-A) is a protein that is encoded on chromosome 3q27.3 [131]. This protein is synthesized by hepatocytes and adipocytes and has several functions in brain development, endocytosis, and the formation of bone tissue [132]. Like carrier proteins (e.g., albumin), it is present in the serum, and SNPs have been associated with serum fetuin-A levels [133]. Fetuin-A can increase insulin resistance and inflammation, and it is essential for the deactivation of macrophages by modulating endogenous cations [134]. Low serum levels of AHSG have been associated with uncontrolled production of proinflammatory cytokines [135,136]. Furthermore, polymorphisms in the AHSG gene (SNP rs2248690; risk allele, T) might affect the outcome of SARS-CoV-1 infection [137]. So far, polymorphisms in this gene have not been studied in COVID-19 patients. The Toll-like receptor 3 (TLR3) gene is located on chromosome 4q35.1. TLR3 is a member of the TLR family of pattern-recognition receptors and has an important function in sensing and activation of the innate immune system. An in silico analysis by Teimouri et al. showed that polymorphisms at rs3775290 and rs3775291 enhanced recognition of SARS-CoV-2 dsRNA by TLR3, whereas those at rs73873710 decreased its efficiency [140].
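The risk and protective variants catalogued in the sections above are all identified by dbSNP rsIDs, so screening an individual genome for them is conceptually straightforward. The sketch below is a minimal, hypothetical illustration: it assumes a single-sample VCF whose ID column is annotated with rsIDs and whose FORMAT field begins with GT; the file name, the rsID-to-gene mapping shown, and the helper name are invented for the example and are not from this review.

```python
import gzip

# A few of the variants discussed above (rsID -> gene); illustrative only.
VARIANTS_OF_INTEREST = {
    "rs12252": "IFITM3",
    "rs2285666": "ACE2",
    "rs2070788": "TMPRSS2",
    "rs1990760": "IFIH1",
    "rs2236757": "IFNAR2",
}

def screen_vcf(path):
    """Yield (rsID, gene, REF, ALT, GT) for matching records in a
    single-sample, dbSNP-annotated VCF (hypothetical helper)."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as fh:
        for line in fh:
            if line.startswith("#"):
                continue  # skip meta-information and header lines
            fields = line.rstrip("\n").split("\t")
            rsid = fields[2]  # the VCF ID column
            if rsid in VARIANTS_OF_INTEREST:
                # Assumes GT is the first FORMAT key, as is conventional.
                genotype = fields[9].split(":")[0]
                yield (rsid, VARIANTS_OF_INTEREST[rsid],
                       fields[3], fields[4], genotype)

for hit in screen_vcf("patient.vcf.gz"):  # hypothetical input file
    print(*hit)
```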
Vitamin D binding protein

Vitamin D binding protein (DBP) is a multifunctional protein that is involved in various clinical conditions by regulating vitamin D metabolite levels [141]. The DBP gene is located on chromosome 4q11-q13 and is predominantly expressed in liver tissue [142]. Factors such as estrogen, glucocorticoids, and inflammatory cytokines modulate DBP expression levels. Consisting of 458 amino acids, DBP has been identified as one of the most polymorphic proteins [142]. Allele variations have a substantial impact on its biological functions; for example, the Gc1s (rs7041 locus) and Gc2 (rs4588 locus) alleles increase the affinity of DBP for vitamin D and are associated with lower free vitamin D levels [141]. In HCV infection, the G allele at the rs7041 locus has been reported to be associated with the expression of a high-affinity receptor, which increases the risk of infection [143]. People carrying the GG genotype at the rs4588 locus of the Gc2 polymorphic region have lower levels of 25(OH)D compared to individuals carrying the AA genotype following vitamin D supplementation [144]. Batur et al. showed that DBP variations such as the GT genotype at rs7041 are significantly correlated with a higher frequency of COVID-19-related deaths, while the TT genotype showed a significant negative correlation with mortality [141]. In addition, the polymorphism at the rs4588 locus had no significant impact on COVID-19 severity [141]. These findings indicate that variations in mortality rate may be explained by DBP polymorphisms, which affect vitamin D metabolism [141].

Vitamin D receptor

The vitamin D receptor (VDR) gene is located in region q13.11 of chromosome 12 [145]. VDR is a nuclear receptor that functions as a transcription factor when activated by its ligand [146]. Most of the VDR activity is controlled by the interaction with its ligand 1,25(OH)2D, but some functions are vitamin-D-independent [147,148]. VDR belongs to the steroid receptor family, which includes the retinoic acid, thyroid hormone, sex hormone, and adrenal steroid receptors, which have a broad distribution among different cells [149,150]. Vitamin D signaling is involved in calcium and bone homeostasis, skin biology, immune health, oral health, cancers, and cardiovascular diseases [148]. Many cells of the innate and adaptive immune system express VDR. Some of these cells express CYP27B1 for producing the biologically active form of vitamin D [148]. Data from experiments using animal models have demonstrated that vitamin D/VDR signaling modulates autoimmune T cell responses and immune-inflammatory reactions, but the physiological activity in humans is not completely understood [151]. Therefore, its role in the pathogenesis and outcome of immune-inflammatory diseases remains to be elucidated [148,152]. Based on a meta-analysis by Laplana et al., a polymorphism at locus rs2228570 was associated with viral infections. The TT genotype and T allele were reported to be risk factors for infections with enveloped viruses, including RSV [153]. There are numerous findings indicating that vitamin D supplementation can reduce the risk of severe infection and death related to influenza and COVID-19. However, none of these studies have reported the importance of vitamin D receptor polymorphisms in COVID-19 disease outcome [154]. When examining the antimicrobial role of vitamin D in different populations, we need to take VDR polymorphisms into account.
Glucose-regulated protein 78 kDa

Glucose-regulated protein 78 kDa (GRP78), or heat shock protein family A (HSPA) member 5, belongs to the HSP70 family and is found on the membrane of the endoplasmic reticulum (ER). The gene encoding this protein is located on chromosome 9q33.3 [155,156]. GRP78 plays an accessory role in many stages of the viral life cycle, including viral attachment and entry (as a facilitating or alternative factor), protein production (proper folding and processing), release (assembly and maturation), and re-infectivity (released together with mature virions and acting as an accessory infectivity factor) [157]. Bioinformatic modeling has predicted favorable binding between GRP78 and regions III (C391-C525) and IV (C480-C488) of the SARS-CoV-2 S protein [158]. In cells harboring the -415A/-180G allele, expression of HSPA5 was significantly increased compared to cells harboring the -415G/-180del allele following ER stress [159]. GRP78 expression and its polymorphisms may be associated with SARS-CoV-2 infection and mortality rates, but their role in COVID-19 is still poorly understood.

CD147 polymorphisms

CD147 is a cell-surface glycoprotein belonging to the immunoglobulin superfamily that plays a role in intercellular recognition [160]. The CD147 gene encompasses a stretch of 7500 bp on chromosome 19p13.3 and encodes a protein from eight exons [161,162]. CD147 is also known as extracellular matrix metalloproteinase inducer (EMMPRIN) or basigin and is expressed by a variety of cell types, including endothelial cells, epithelial cells, and lymphocytes [163-166]. It has been reported that CD147 regulates proliferation, differentiation, migration, metastasis, and apoptosis of tumor cells, particularly under hypoxic conditions [167,168]. Therefore, the role of CD147 in the progression and metastasis of tumors has been studied [169]. CD147 has recently been identified as a marker of inflammation [170]. Moreover, some studies have shown that CD147 is an important molecule in proteolysis and inflammation [171]. CD147 has various ligands, e.g., integrins, cyclophilin, and Plasmodium falciparum reticulocyte binding-like homologue 5 [168]. The role of CD147 in infections by viruses such as Kaposi's sarcoma-associated herpesvirus (KSHV), HBV, HCV, and HIV has been studied extensively, and these studies have supported the importance of CD147 in viral pathogenesis and tumorigenesis [168]. Interactions of CD147 with cyclophilin A during HIV infection accelerate its uptake into cells. A similar mechanism has been reported for SARS coronavirus infection [172,173]. Furthermore, CD147 on epithelial cells acts as a receptor for measles virus [174]. Studies have reported that the CD147 gene contains a number of SNPs in coding and regulatory regions, including rs2283574, rs6757, rs8637, rs4919862, rs6758, rs8259, rs4919859, and rs28915400. These regions might be involved in numerous diseases and disorders [175,176]. For instance, it has been reported that polymorphisms in the CD147 gene might effectively contribute to the initiation and progression of acute coronary syndrome and skin diseases [171,177,178]. Although CD147 has been studied in the context of viral pathogenesis and tumorigenesis, the significance of its polymorphisms in viral infection outcome remains to be determined [168]. Recent studies have suggested that CD147 can serve as an alternative receptor for SARS-CoV-2 [179].
Although some studies have investigated ACE-2 polymorphisms that enhance or diminish binding of the S protein to ACE-2, polymorphisms that affect S protein binding to CD147 have not been reported in SARS-CoV-2 infection. Meanwhile, Wu et al. have reported that the T/A polymorphism (rs8259) in the 3'-UTR of the CD147 gene, which interacts with miR-492, alters the expression of CD147 [178]. However, no studies have been conducted to investigate the relationship between SNPs of CD147 or miRNA-492 and the risk of SARS-CoV-2 infection. miRNA-492 is encoded on chromosome 12q22 and binds to complementary sequences in the 3'-UTRs of target mRNAs.

Dipeptidyl peptidase 4 (DPP4)

Dipeptidyl peptidase 4 (DPP4), also known as CD26, is encoded by the DPP4 gene, which is located on chromosome 2q24.2 [180]. DPP4 is a serine exopeptidase that is expressed on the surface of most cell types and cleaves X-alanine or X-proline dipeptides from the N-termini of polypeptides. DPP4 has been identified as the cellular entry receptor for MERS-CoV. Current data suggest that CD26 does not act as a receptor for SARS-CoV-2 [181]. However, one study has shown that polymorphisms at the rs13015258 locus of the CD26 gene affect the expression of key regulatory genes related to internalization of SARS-CoV-2 into the host cell [62].

Neuropilin 1

Understanding the mechanisms and pathways involved in the cellular entry of SARS-CoV-2 is essential to delineate virus tropism and to develop preventive strategies. In addition to ACE-2, CD147, the receptor for advanced glycation end products (RAGE), and olfactory receptors, recent experiments have suggested neuropilin 1 (NRP-1) to be a new mediator of SARS-CoV-2 entry in the nervous system [181,182]. An in vitro study confirmed the direct binding of the S1 CendR motif of SARS-CoV-2 to NRP-1 [181]. Moreover, it has been shown that NRP-1 binds to furin-cleaved substrates and consequently enhances the infectivity of SARS-CoV-2. This mechanism can be targeted by NRP-1 monoclonal antibodies to block entry of the virus into the cell [182]. NRP-1 is a type I transmembrane protein, and its gene is located on chromosome 10p11.22 [153]. NRP-1 is involved in providing cues for axonal guidance and neuronal development. Furthermore, it has been identified as a co-receptor for multiple ligands, such as vascular endothelial growth factor (VEGF), semaphorins (SEMA), and transforming growth factor beta (TGF-β) [184,185]. A higher level of expression of NRP-1 has been detected in the upper respiratory tract and in the olfactory epithelium covering the nasal cavity [186]. Similar polybasic furin-type cleavage sites (RRAR^S) at the S1-S2 junction of the spike glycoprotein have been observed in other human viruses, including Ebola virus, HIV-1, and highly virulent strains of avian influenza virus. In contrast, a similar site was found to be lacking in SARS-CoV-1 [187,188]. There are a growing number of studies investigating the role of NRP-1 in the immune response. Hwang et al. demonstrated that NRP-1 regulates the secondary CD8 T cell response to viral infections. NRP-1 was also found to be important for the function of regulatory T (Treg) cells [189]. Overexpression of NRP-1 in Treg cells enhances their interactions with dendritic cells, which attenuates immune responses in the absence of danger signals. Based on these findings, polymorphisms in the NRP-1 gene might affect the immune response to SARS-CoV-2 infection by inhibiting antigen presentation in the lymph nodes and subsequent viral clearance.
In one study, 11 functional SNPs that have been associated with various clinical conditions were reported in the NRP-1 gene. For instance, a polymorphism at rs2228638 was shown to be associated with an increased risk of cyanotic congenital heart disease [190]. Recently, it was reported that binding of miR-338 to the 3'-UTR of NRP-1 significantly inhibits the expression of NRP-1. Since the rs10080 SNP is located in the 3'-UTR region, it might affect the expression pattern of NRP-1 [190]. For instance, it has been reported that the G allele of rs10080 can downregulate the expression of NRP-1 [191]. Therefore, individuals carrying the G allele might express lower levels of NRP-1 on target cells, which could alter the neuropathogenesis associated with COVID-19 disease. More investigations regarding the effect of SNPs in NRP-1 on the pathogenesis of SARS-CoV-2 infection in other tissues are recommended.

Heme oxygenase 1

The heme oxygenase 1 (HO-1, HMOX1) gene is located on chromosome 22q12.3 [191]. The heme oxygenase system is an anti-inflammatory, cytoprotective system that includes HO-1 and HO-2. It degrades heme to biliverdin (subsequently converted to bilirubin), free iron, and carbon monoxide and plays an important role in the antioxidant and antiapoptotic activity of cells [192]. The high-affinity binding of the SARS-CoV-2 spike protein to porphyrin [193] upregulates the formation of reactive oxygen species (ROS) and free heme and decreases the level of HO-1 [192,194,195]. Therefore, the SARS-CoV-2-porphyrin complex may impair HO-1 signaling by downregulating HO-1 gene expression, resulting in severe oxidative stress induced by free heme and iron [192,194,196]. It has been reported that a genetic polymorphism, a GT dinucleotide repeat in the promoter region of HO-1, affects the transcription of HO-1 and might be associated with the COVID-19-induced cytokine storm [192,197-202]. In this regard, Fakhouri et al. concluded that individuals with longer GT repeats in the HO-1 promoter were at higher risk of developing severe COVID-19 disease [192]. Moreover, it has been reported that the T allele at the rs2071746 locus of the HO-1 gene can modulate the expression of HO-1 and influence the severity of COVID-19 disease [203].

Apolipoprotein L1

Apolipoprotein L1 (APOL1), a member of the APOL gene family, is encoded by a gene located on chromosome 22q12.3 [204]. APOLs play important roles in lipid transport and metabolism, innate immunity, apoptosis, and autophagy [205-211]. Furthermore, APOL1 is involved in inflammatory and pro-inflammatory responses. For instance, cytokines such as TNF-α and IFN-γ can upregulate APOL1 expression [210]. Recently, a relationship between COVID-19 disease and APOL1 polymorphism was reported in African patients [212]. These researchers emphasized the potential key role of the G1 and G2 alleles in the formation of collapsing focal segmental glomerulosclerosis (FSGS) related to SARS-CoV-2 [212]. Moreover, Larsen et al. reported that CG variants of the APOL1 gene were associated with the COVID-19-related severity of kidney disease [213]. The CG genotype has not been reported in Chinese and European populations. Therefore, the high-risk APOL1 genotypes might exist only in African populations [213,214].

Vitamin K epoxide reductase complex subunit 1

The vitamin K epoxide reductase complex 1 (VKORC1) gene is located on chromosome 16p11.2 [215].
Polymorphism at the regulatory -1639A locus of the VKORC1 gene is common in the East Asian population and is associated with low vitamin K turnover [216]. Based on reported data, it has been proposed that the VKORC1 -1639A polymorphism (rs9923231) is associated with protection against thrombotic complications of COVID-19 infection, which might partially explain the differences in the severity of COVID-19 infection between Western and Eastern countries [217].

ABO blood group system

The ABO blood group system is based on the presence or absence of the A and B glycan antigens on red blood cells. A particular ABO blood type might confer resistance to infectious diseases. Several studies have found an association between the ABO blood group and susceptibility to SARS-CoV-2 infection. For instance, individuals with blood group O had a lower risk of infection than those with blood group A [130,218]. Moreover, SNP rs657152 at locus 9q34.2, which maps to the ABO locus, was reported to be a genetic susceptibility locus in COVID-19 patients diagnosed with respiratory failure [219].

Conclusion

In addition to being a local respiratory disease, COVID-19 is a complex multi-organ disease in which human genetic polymorphisms play a distinctive role in the disease outcome. Currently, knowledge about the role of gene polymorphisms in the pathogenesis of SARS-CoV-2 infection is insufficient and controversial. In this review, we have provided a comprehensive overview of the relevant research data currently available. The affinity-determining variants in the ACE-2 gene, such as rs2285666, rs4646116, rs267606406, rs143936283, rs961360700, rs146676783, and rs1244687367, are associated with SARS-CoV-2 infection outcome. The expression level of TMPRSS2 plays a considerable role in determining the severity of COVID-19. Polymorphisms in the TMPRSS2 gene, including rs2070788, rs9974589, and rs7364083, are associated with increased expression of this gene. The G allele at rs10080 of the NRP-1 gene might be associated with lower expression of the NRP-1 protein on target cells and might result in decreased COVID-19 pathogenesis. Moreover, polymorphisms in immune-related genes, including TLR3, IFNL4, IFIH1, and CCR5, are likely to influence the outcome of COVID-19 disease. Finally, genetic polymorphisms in some other genes, including the HO-1, APOL1, VKORC1, DPP4, and DBP genes, might influence the SARS-CoV-2 infection outcome. The present review was an attempt to clarify the importance of human gene polymorphisms in the clinical outcome of SARS-CoV-2 infection. This overview provides insights for disease management and control.

Author contributions: All authors contributed to the writing and revision of the manuscript. All authors read and approved the final manuscript.

Funding: The present study was financially supported by Shiraz University of Medical Sciences (Grant no. 22169).

Declarations. Conflict of interest: The authors declare no conflict of interest.
Renal Involvement in Multisystem Inflammatory Syndrome in Children: Not Only Acute Kidney Injury

Kidney involvement has been poorly investigated in SARS-CoV-2 Multisystem Inflammatory Syndrome in Children (MIS-C). To analyze the spectrum of renal involvement in MIS-C, we performed a single-center retrospective observational study including all MIS-C patients diagnosed at our Pediatric Department between April 2020 and May 2022. Demographic and clinical data, the need for pediatric intensive care unit (PICU) admission, and laboratory data were collected at onset and after 6 months. Among 55 MIS-C patients enrolled in the study, kidney involvement was present in 20 (36.4%): 13 with acute kidney injury (AKI) and 7 with isolated tubular dysfunction (TD). In eight patients, concomitant AKI and TD was present (AKI-TD). AKI patients needed higher levels of intensive care (PICU: 61.5%, p < 0.001; inotropes: 46.2%, p = 0.002; second-line immunotherapy: 53.8%, p < 0.001) and showed lower levels of HCO3- (p = 0.012) and higher inflammatory markers [neutrophils (p = 0.092), PCT (p = 0.04), IL-6 (p = 0.007)] as compared to no-AKI. Markers of TD showed that isolated TD presented with higher levels of HCO3- and lower inflammatory markers than AKI-TD. Our results indicate a combination of both pre-renal and inflammatory damage in the pathogenesis of kidney injury in MIS-C. We highlight, for the first time, the presence of tubular involvement in MIS-C, providing new insights into the evaluation of kidney involvement and its management in this condition.

Introduction

Multisystem Inflammatory Syndrome in Children (MIS-C) is a severe hyperinflammatory disease occurring 2-6 weeks after SARS-CoV-2 infection or exposure. This condition was described for the first time in April 2020 by clinicians in the United Kingdom [1]. The Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO) have subsequently proposed case definition criteria for MIS-C, which lay out a clinical picture of persistent fever, elevation of inflammatory markers, and evidence of two affected organ systems (e.g., cardiac, gastrointestinal, renal, hematologic, dermatologic and neurologic), with exclusion of other causes. Affected patients are less than 21 years old and have a history of previous COVID-19 infection or exposure [2,3]. Clinical manifestations range from milder symptoms, including fever, rashes and gastrointestinal symptoms, to life-threatening conditions [4,5]. Although multiple systems can be affected in MIS-C, the cardiovascular system is the one most often involved, with high rates of cardiogenic shock and PICU admission [4,5].
In adults with primary SARS-CoV-2 infection, renal complications have been widely reported and include acute kidney injury (AKI), proteinuria and hematuria [6]. Up to one quarter of hospitalized infected patients developed AKI, which has been identified as a negative prognostic marker [7]. During acute infection, a direct cytopathic role of SARS-CoV-2 resulting in kidney damage has been hypothesized [6], although recent studies have also suggested other pathogenetic mechanisms such as renin-angiotensin-aldosterone system (RAS) imbalance, cytokine storm, endothelial dysfunction and/or hemodynamic instability [8]. Angiotensin-converting enzyme type 2 (ACE2-R), the proposed receptor for SARS-CoV-2, is expressed in the kidney epithelium, especially in proximal tubular cells [9,10]. Few studies have analyzed the characteristics of tubular involvement in SARS-CoV-2-infected adults and children [11-14]. Kidney biopsy revealed variable degrees of tubular necrosis and, less frequently, glomerular involvement [9]. Detection of SARS-CoV-2 in urine samples of infected patients has also been described [15], and a study performed on autopsies of 26 COVID-19 patients demonstrated virus particles in the epithelium of proximal tubules and in podocytes [10]. These considerations have suggested that kidney damage may be directly induced by the virus during acute infection, although the exact viral cytopathic mechanism remains not completely understood.

Kidney involvement in MIS-C has been even less explored so far. According to a recent systematic review and meta-analysis, up to one fifth of children with MIS-C develop AKI, with the need for kidney replacement therapy in 5-37% of cases [20]. Moreover, AKI seems to be a negative prognostic factor, since it is associated with a higher probability of death and of pediatric intensive care unit (PICU) admission [20,21].

The pathogenesis of kidney involvement in MIS-C is still unclear and probably multifactorial. The main proposed mechanisms include a hyperimmune response, renal hypoperfusion due to dehydration and cardiac dysfunction, RAS imbalance, endothelial dysfunction and drug toxicity [22].

In addition, a few studies have reported the presence of hematuria, proteinuria and pyuria in patients with MIS-C [23-25]. To the best of our knowledge, no study has so far focused on tubular dysfunction in children with COVID-19-related multisystem inflammatory syndrome.

This study aims to describe the spectrum of kidney involvement in a cohort of MIS-C patients and to discuss possible pathogenetic mechanisms.

Materials and Methods

We performed a single-center retrospective observational study enrolling all patients diagnosed with MIS-C at the Pediatric Department of Padua between April 2020 and May 2022. Patients were diagnosed according to the WHO's MIS-C case definition [3]. We collected data regarding demographic, clinical and laboratory characteristics and types of treatment at clinical onset and after 6 months.
According to the KDIGO criteria, AKI was defined as an elevation of serum creatinine to at least 1.5 times the normal baseline for age and/or a reduced urine output of less than 0.5 mL/kg/h. Tubular dysfunction (TD) was defined by the elevation of an acute TD marker (the ratio of urinary NAG, uNAG, to urinary creatinine). Polyuria was defined as a urinary output > 4 mL/kg/h. Hypophosphatemia was defined according to the phosphate range for age indicated by our laboratory standard. We considered a tubular reabsorption of phosphate (TRP) value below 80% as an indicator of increased phosphaturia. uNAG values and TRP were analyzed during patient hospitalization based on clinical needs, with different times of evaluation, as shown in Table 1. Proteinuria was evaluated in a 24-h urine sample, with values > 10 mg/kg/day considered pathological. Hematuria was considered positive or negative based on urine dipsticks performed on admission. Gastrointestinal involvement was defined as abdominal pain and/or diarrhea and/or vomiting.

Cardiovascular involvement was defined as systolic dysfunction: hypotension (systolic pressure < 5th percentile for age) and/or left ventricular ejection fraction (LV-EF) reduction < 55% and/or left ventricular global longitudinal strain (LV-GLS) depression < -18 and/or elevation of cardiac markers according to our laboratory reference values (Troponin I > 32 ng/L, Brain Natriuretic Peptide > 100 ng/L).

Therapeutic approaches included intravenous immunoglobulins (IvIg), corticosteroids and acetylsalicylic acid as first-line therapy. Second-line therapy with a biological medication (IL-1 receptor antagonist) was chosen in case of severe and/or persistent symptoms.

Patients were divided into four groups according to the type of kidney involvement: AKI, no-AKI, isolated TD and AKI associated with TD (AKI-TD).

Categorical variables were compared with Fisher's exact or Pearson's χ2 test, and continuous variables were compared with Student's t-test.

Results

Over a two-year period, we diagnosed 55 patients with MIS-C at our Pediatric Department: 21 females (38.2%) and 34 males (61.8%). The mean age at diagnosis was 8 years (range 1.2-17.5). Associated comorbidities were described in four patients: Moebius syndrome with cerebral palsy (1 case), type 1 diabetes (1 case), Gitelman syndrome (1 case) and PFAPA syndrome (1 case). Table 2 summarizes the main clinical characteristics and laboratory profile of our MIS-C cohort. As reported in Table 2, gastrointestinal involvement was present in 78% of the patients, with increased fluid losses due to diarrhea and/or vomiting in 63%. Cardiovascular involvement was found in 89%, mainly as systolic dysfunction (Troponin elevation in 46% and pathological pro-BNP values in 83%). Mucocutaneous manifestations were detected in 80% of cases, while only five patients (9%) had central nervous system involvement (three encephalopathy with seizures, two headaches).
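Before turning to the kidney-specific results, the sketch below gives a worked illustration of the renal-involvement definitions stated in the Methods above: the KDIGO creatinine/urine-output criteria, the uNAG-based TD definition and the TRP < 80% cut-off. The TRP formula shown is the standard one (it is not spelled out in the paper), and all function and argument names are hypothetical.

```python
def trp_percent(u_phos, s_phos, u_creat, s_creat):
    """Tubular reabsorption of phosphate, in %, from paired spot urine and
    serum samples (standard formula; use consistent units per analyte):
    TRP = 1 - (uPhos * sCr) / (sPhos * uCr)."""
    return (1.0 - (u_phos * s_creat) / (s_phos * u_creat)) * 100.0

def classify_kidney_involvement(s_creat, s_creat_baseline,
                                urine_output_ml_kg_h, unag_ucr_elevated,
                                trp=None):
    """Apply the study's definitions: KDIGO AKI (creatinine >= 1.5x the
    age-appropriate baseline or urine output < 0.5 mL/kg/h) and TD
    (elevated uNAG/uCr); TRP < 80% flags inappropriate phosphaturia."""
    aki = (s_creat >= 1.5 * s_creat_baseline) or (urine_output_ml_kg_h < 0.5)
    td = unag_ucr_elevated
    inappropriate_phosphaturia = trp is not None and trp < 80.0
    if aki and td:
        label = "AKI-TD"
    elif aki:
        label = "AKI"
    elif td:
        label = "isolated TD"
    else:
        label = "no renal involvement"
    return label, inappropriate_phosphaturia

# Example: creatinine 1.8x baseline, oliguria, elevated uNAG/uCr, TRP 72%.
print(classify_kidney_involvement(0.9, 0.5, 0.4, True, trp=72.0))
```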
Kidney involvement was reported in 20 cases: 13 with AKI and 7 with isolated TD. Interestingly, eight of the AKI patients also had concomitant TD. Table 2 describes the tools used to investigate kidney involvement. Considering the whole MIS-C cohort, the incidence of AKI was 23% (13/55). All patients presented with AKI on hospital admission, 10/13 showed rapid recovery within three days, and none required renal replacement therapy. In three patients, complete resolution required a longer time (maximum 6 days). In the AKI group, 11/13 (84.6%) had stage 1 AKI and 2/13 (15.3%) presented with stage 3 AKI. Nephrotic proteinuria (> 50 mg/kg/day) was reported in two patients, both presenting with AKI stage 3 and positive u-NAG. u-NAG was tested in only 19 patients, and 15 (78%) had pathological values, of whom 7 (46%) interestingly had isolated TD without AKI. Considering urinary output, polyuria was detected in 16% of patients during hospitalization, five of them with elevated u-NAG. At 6 months, only 1 case presented sequelae, with low-grade hypertension. This patient, affected by Moebius syndrome and cerebral palsy, presented with severe kidney involvement (AKI grade 3) and nephrotic proteinuria at disease onset and needed PICU admission for inotropic support. All the other patients with kidney involvement showed complete recovery at 6 months.

No patients presented with macrohematuria. On the urine dipstick performed on hospital admission, we detected microhematuria in 8/35 (22%). No patients presented with glycosuria.

No renal biopsies were performed.

In our MIS-C cohort, patients were subdivided into four groups according to renal involvement: AKI, no-AKI, TD-AKI and isolated TD.

Comparing patients with AKI and without AKI, the main difference consisted of a more severe disease course in AKI patients, since they more frequently required admission to the PICU (61.5%, p < 0.001), hemodynamic support with inotropes (46.2%, p = 0.002) and second-line treatment with IL-1 inhibitors (53.8%, p < 0.001). This clinical evidence was associated with higher inflammatory markers such as neutrophil count (p = 0.092), procalcitonin level (p = 0.04) and serum IL-6 (p = 0.007) (Table 3). Interestingly, we identified seven patients with isolated TD, without associated AKI. This group, compared to patients in whom TD was associated with AKI (TD-AKI), presented with higher levels of HCO3-, lower inflammatory markers and no need for intensive care or second-line therapy with anti-IL-1 inhibitors (Table 4). Finally, in the whole MIS-C cohort we found a high rate of patients with hypophosphatemia (38/51, 75%; mean value 1.15 mmol/L). Inappropriate phosphaturia, which is a sign of tubular dysfunction, was reported in 20% of tested cases. Hypophosphatemia was associated with higher values of IL-6 (p = 0.044) and the need for inotropic support (p = 0.05) (Table 5).
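The group comparisons reported in Tables 3-5 rely on the tests named in the Methods (Fisher's exact or Pearson's χ2 test for categorical variables, Student's t-test for continuous ones). A minimal sketch is shown below; the 8/5 PICU split in the AKI row matches the 61.5% of 13 patients reported above, while the no-AKI counts and the IL-6 values are hypothetical illustrative numbers, not data from the study.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact, ttest_ind

# 2x2 table: PICU admission (yes / no) in AKI vs no-AKI patients.
# The AKI row (8/13 = 61.5%) matches the paper; the no-AKI row is invented.
table = np.array([[8, 5],
                  [4, 38]])
odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, expected = chi2_contingency(table)

# Continuous marker (hypothetical serum IL-6 values, pg/mL) in the groups.
il6_aki = np.array([310.0, 250.0, 420.0, 180.0, 520.0])
il6_no_aki = np.array([60.0, 95.0, 40.0, 120.0, 75.0])
t_stat, p_t = ttest_ind(il6_aki, il6_no_aki)

print(f"Fisher p = {p_fisher:.4f}, chi2 p = {p_chi2:.4f}, t-test p = {p_t:.4f}")
```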
Discussion

In the pediatric population, AKI has been described with higher frequency in MIS-C patients than in acute SARS-CoV-2 infection [23], with a reported incidence varying between studies. A recent systematic review and meta-analysis stated that up to 20% of patients with MIS-C developed AKI [20]. Consistent with these data, in our study we found AKI in 23% of patients. In our study, AKI had a self-limiting course, with no patients needing renal replacement therapy and only one presenting with hypertension at the 6-month follow-up. Our results differ from other reports, in which renal replacement therapy has been reported in up to 15% of patients with MIS-C and kidney involvement [20-22,26,27]. Since in our cohort the AKI subgroup presented a more severe disease course, more frequently requiring intensive care support and second-line immunomodulatory therapy, we may suppose that a prompt diagnosis and a more aggressive therapeutic approach may have favored complete recovery in our patients.

Possible causes of kidney injury in MIS-C include cardiac dysfunction, hypovolemia, cytokine storm, endothelial dysfunction, rhabdomyolysis, and nephrotoxic drugs [8,27,28]. It has been reported that AKI in MIS-C is associated with higher inflammatory markers, greater rates of systolic dysfunction, a need for inotropes and lower levels of albumin and bicarbonate, suggesting a prerenal component in the pathogenesis of kidney insufficiency in this syndrome [26,28]. In particular, some authors have described higher values of inflammatory biomarkers such as white blood cells, CRP, procalcitonin, D-dimer, ferritin and IL-6 in MIS-C patients presenting with AKI. In the same studies, an association with systolic dysfunction, need for inotropes and lower levels of albumin and bicarbonate was observed [25,26,28]. These data therefore suggest a double component in the pathogenesis of kidney injury in MIS-C, due to both an inflammatory pathway and prerenal injury [26].

In our study, patients with AKI demonstrated higher degrees of inflammatory markers (neutrophils, PCT, IL-6) and presented clinical and biochemical data consistent with hypovolemia (history of vomiting/diarrhea, lower serum bicarbonate). Our study also confirms that AKI is associated with a more severe course, with a higher need for intensive care, inotropes, and second-line immunomodulatory therapy with IL-1 inhibitors [19,25,28]. Thus, our results support the hypothesis that kidney injury in MIS-C is sustained both by inflammation and by hypovolemia, reinforcing the importance of prompt and appropriate management with hemodynamic support and immunomodulators to prevent kidney dysfunction. Furthermore, these two mechanisms may be closely related to each other, since hyperinflammation may contribute to a capillary leak syndrome, worsening the hypovolemic state and the prerenal injury.

While the available literature on kidney involvement in MIS-C is mostly focused on AKI, a few reports have described acute tubular involvement in pediatric patients during SARS-CoV-2 infection, but never during MIS-C [9,14,16]. Devrim et al. described TD in 20.9% of hospitalized children with SARS-CoV-2 infection, excluding those needing intensive care [14]. Tas et al. reported high levels of urinary beta-2-microglobulin and urinary IL-6 during the acute phase of infection in COVID-19-positive children [16].

Furthermore, a few authors have reported tubular dysfunction in acute SARS-CoV-2 infection also in adults. Werion et al.
detected elevation of β2-microglobulin in 69% of patients affected by COVID-19 and reported defective reabsorption of phosphate (19%) and uric acid (46%) [13]. Gustavo et al. observed high excretion of calcium, sodium and phosphate and an alkaline pH in urine samples of adults infected with SARS-CoV-2, suspecting a tubular dysfunction; in that study, tubular markers such as uNAG were not examined [12]. Fukao et al. analyzed tubular injury markers (uNAG, L-FABP, urinary β2-microglobulin, urinary α1-microglobulin) in patients with SARS-CoV-2 infection without AKI: patients with severe infection (defined as need of oxygen therapy) had significantly higher tubular damage markers and levels of IL-6, suggesting a possible contributing role of the cytokine storm in tubular injury [12].

We investigated TD because we clinically observed that some children with MIS-C presented an increased urinary output and hypophosphatemia. When tubular markers were investigated, we detected a high frequency of pathological findings (uNAG). In patients with previous AKI, TD was suspected and investigated a few days after AKI development (Table 1). In patients with isolated TD, we noticed lower rates of hypovolemia and inflammatory markers (IL-6, PCT) and no need for intensive care or second-line immunomodulatory therapy as compared to the AKI-TD subset (Table 4). However, considering that tubular cells are very vulnerable to ischemic damage, we might suppose that the tubules represent the first kidney component to be affected early in MIS-C patients, although not necessarily evolving further into AKI.

We also noticed a high percentage of patients with serum hypophosphatemia in our MIS-C cohort. TRP was calculated for 23 patients. Among the 19 patients with hypophosphatemia and a concurrent evaluation of phosphate excretion, 20% had inappropriately high excretion of phosphate (TRP < 80%). Hypophosphatemia spontaneously resolved after immunomodulatory therapy for MIS-C, without the need for phosphate supplementation.

Low serum levels of phosphate have already been described in critically ill children and have been correlated with PICU hospitalization, ventilation, or malnutrition [29-31]. The physiopathology of the phenomenon is still uncertain. In our study, we found that low levels of serum phosphate seem not to be related to tubular dysfunction but, more probably, to the inflammatory state and a more severe course, given the positive correlation of hypophosphatemia with higher serum levels of IL-6 and the use of inotropes (Table 5).

Considering the cytokine profile, in our cohort we found that MIS-C patients with AKI had significantly higher levels of serum IL-6 (Table 3). In SARS-CoV-2-related conditions there is growing interest in cytokine profiles, and IL-6 has been considered one of the major contributing factors of the "cytokine storm" [32], so that in critically ill patients IL-6 inhibitors have been used as a possible targeted therapy [33]. IL-6 antagonists have also been proposed as rescue therapy in MIS-C patients who did not respond to IL-1 inhibitors [34]. The diagnostic and prognostic significance of IL-6 levels in MIS-C is not clearly understood. Experimental studies showed IL-6 activation and secretion by podocytes, endothelial cells, mesangial cells, and tubular epithelial cells in inflammatory diseases with kidney involvement [35]. In animal models of ischemic AKI, increased IL-6 transcription and signaling has been demonstrated both locally and systemically, suggesting IL-6 as a potential biomarker and therapeutic target in ischemic AKI [36].
Treatment with IvIg is an already described cause of renal injury, mostly reported in patients with pre-existing renal disease, hypertension, diabetes mellitus, volume depletion and concomitant use of nephrotoxic drugs [37-40]. Osmotic injury, glomerular precipitation of immune complexes, acute tubular obstruction and renal hypoperfusion have been proposed as possible pathogenetic mechanisms of kidney damage [37]. Oliguric renal dysfunction is the most common manifestation, occurring within 10 days after IvIg infusion, with a maximum increase in serum creatinine on day 5 and complete recovery within 15 days [37,38]. In our AKI cohort, renal dysfunction was already present on admission, before IvIg administration, and none of the patients presented a worsening of renal function after the infusion. Based on our experience, patients with isolated TD presented a positive clinical course and fewer risk factors for IvIg-related kidney damage, as described above. On the other hand, we cannot completely exclude a possible contributing role of IvIg in the tubular involvement of this subset of patients. Further studies with larger cohorts are needed to better understand the possible association between TD and exposure to IvIg in MIS-C patients.

Our study has some limitations. Firstly, it is a single-center retrospective study. Although being monocentric favored uniformity in data collection, the lack of a routine protocol for tubular marker screening in all patients at admission, during active disease and at follow-up resulted in small samples that were not always statistically comparable. Moreover, testing for TD markers was based on clinical needs, with a possible selection bias causing overestimation of the rates of pathological results.

To the best of our knowledge, however, this is the first study to show tubular dysfunction in MIS-C patients, providing new insights into the evaluation of kidney injury in this condition. We suggest evaluating tubular markers in all MIS-C patients at admission and during the active phase of the disease. Our results also confirmed the high prevalence of kidney involvement in MIS-C, especially in patients with a severe course, supporting both prerenal and inflammatory pathogenetic causes for AKI in this syndrome.

Table 1. Tools to evaluate kidney involvement.
Table 2. Clinical characteristics and laboratory findings of the MIS-C cohort.
Table 3. Clinical characteristics and laboratory profile in AKI and no-AKI patients.
Table 4. Clinical characteristics and laboratory profile of MIS-C patients with tubular disease.
Table 5. Main clinical and laboratory findings in patients with and without hypophosphatemia.
A Multimodal Interface for Access to Content in the Home

In order to effectively access the rapidly increasing range of media content available in the home, new kinds of more natural interfaces are needed. In this paper, we explore the application of multimodal interface technologies to searching and browsing a database of movies. The resulting system allows users to access movies using speech, pen, remote control, and dynamic combinations of these modalities. An experimental evaluation, with more than 40 users, is presented contrasting two variants of the system: one combining speech with traditional remote control input and a second where the user has a tablet display supporting speech and pen input.

Introduction

As traditional entertainment channels and the internet converge through the advent of technologies such as broadband access, movies-on-demand, and streaming video, an increasingly large range of content is available to consumers in the home. However, to benefit from this new wealth of content, users need to be able to rapidly and easily find what they are actually interested in, and do so effortlessly while relaxing on the couch in their living room, a location where they typically do not have easy access to the keyboard, mouse, and close-up screen display typical of desktop web browsing. Current interfaces to cable and satellite television services typically use direct manipulation of a graphical user interface with a remote control. In order to find content, users generally have to either navigate a complex, pre-defined, and often deeply embedded menu structure or type in titles or other key phrases using an onscreen keyboard or triple-tap input on a remote control keypad. These interfaces are cumbersome and do not scale well as the range of content available increases (Berglund, 2004; Mitchell, 1999).

Figure 1. Multimodal interface on tablet

In this paper we explore the application of multimodal interface technologies (see André (2002) for an overview) to the creation of more effective systems for searching and browsing entertainment content in the home. A number of previous systems have investigated the addition of unimodal spoken search queries to a graphical electronic program guide (Ibrahim and Johansson, 2002 (NokiaTV); Goto et al., 2003; Wittenburg et al., 2006). Wittenburg et al. experiment with unrestricted speech input for electronic program guide search, and use a highlighting mechanism to provide feedback to the user regarding the "relevant" terms the system understood and used to make the query. However, their usability study results show that this complex output can be confusing to users and does not correspond to user expectations. Others have gone beyond unimodal speech input and added multimodal commands combining speech with pointing (Johansson, 2003; Portele et al., 2006). Johansson (2003) describes a movie recommender system, MadFilm, where users can use speech and pointing to accept/reject recommended movies. Portele et al. (2006) describe the SmartKom-Home system, which includes a multimodal electronic program guide on a tablet device. In our work we explore a broader range of interaction modalities and devices. The system provides users with the flexibility to interact using spoken commands, handwritten commands, unimodal pointing (GUI) commands, and multimodal commands combining speech with one or more pointing gestures made on a display. We compare two different interaction scenarios.
The first utilizes a traditional remote control for direct manipulation and pointing, integrated with a wireless microphone for speech input. In this case, the only screen is the main TV display (far screen). In the second scenario, the user also has a second graphical display (close screen) presented on a mobile tablet which supports speech and pen input, including both pointing and handwriting (Figure 1). Our application task also differs, focusing on search and browsing of a large database of movies-on-demand and supporting queries over multiple simultaneous dimensions. This work also differs in the scope of the evaluation. Prior studies have primarily conducted qualitative evaluations with small groups of users (5 or 6). Here, a quantitative and qualitative evaluation was conducted examining the interaction of 44 naïve users with two variants of the system. We believe this to be the first broad-scale experimental evaluation of a flexible multimodal interface for searching and browsing large databases of movie content. In Section 2, we describe the interface and illustrate the capabilities of the system. In Section 3, we describe the underlying multimodal processing architecture and how it processes and integrates user inputs. Section 4 describes our experimental evaluation and comparison of the two systems. Section 5 concludes the paper.

Interacting with the system

The system described here is an advanced user interface prototype which provides multimodal access to databases of media content such as movies or television programming. The current database is harvested from publicly accessible web sources and contains over 2000 popular movie titles along with associated metadata such as cast, genre, director, plot, ratings, length, etc. The user interacts through a graphical interface augmented with speech, pen, and remote control input modalities. The remote control can be used to move the current focus and select items. The pen can be used both for selecting items (pointing at them) and for handwritten input. The graphical user interface has three main screens. The main screen is the search screen (Figure 2). There is also a control screen used for setting system parameters and a third comparison display used for showing movie details side by side (Figure 4). The user can select among the screens using three icons in the navigation bar at the top left of the screen. The arrows provide 'Back' and 'Next' navigation through previous searches. Directly below, there is a feedback window which indicates whether the system is listening and provides feedback on speech recognition and search. In the tablet variant, the microphone and speech recognizer are activated by tapping on 'CLICK TO SPEAK' with the pen. In the remote control version, the recognizer can also be activated using a button on the remote control. The main section of the search display (Figure 2) contains two panels. The right panel (results panel) presents a scrollable list of thumbnails for the movies retrieved by the current search. The left panel (details panel) provides details on the currently selected title in the results panel. These include the genre, plot summary, cast, and director. The system supports a speech modality, a handwriting modality, a pointing (unimodal GUI) modality, and composite multimodal input, where the user utters a spoken command which is combined with pointing 'gestures' the user has made towards screen icons using the pen or the remote control.
Speech: The system supports speech search over multiple different dimensions such as title, genre, cast, director, and year. Input can be telegraphic, with searches such as "Legally Blonde", "Romantic comedy", and "Reese Witherspoon", or more verbose natural language queries such as "I'm looking for a movie called Legally Blonde" and "Do you have romantic comedies". An important advantage of speech is that it makes it easy to combine multiple constraints over multiple dimensions within a single query (Cohen, 1992). For example, queries can indicate co-stars: "movies starring Ginger Rogers and Fred Astaire", or constrain genre and cast or director at the same time: "Meg Ryan comedies", "show drama directed by Woody Allen" and "show comedy movies directed by Woody Allen and starring Mira Sorvino".

Handwriting: Handwritten pen input can also be used to make queries. When the user's pen approaches the feedback window, it expands, allowing for freeform pen input. In the example in Figure 3, the user requests comedy movies with Bruce Willis using unimodal handwritten input. This is an important input modality, as it is not impacted by ambient noise such as crosstalk from other viewers or currently playing content.

Figure 3. Handwritten query

Pointing/GUI: In addition to the recognition-based modalities, speech and handwriting, the interface also supports more traditional graphical user interface (GUI) commands. In the details panel, the actors and directors are presented as buttons. Pointing at (i.e., clicking on) these buttons results in a search for all of the movies with that particular actor or director, allowing users to quickly navigate from an actor or director in a specific title to other material they may be interested in. The buttons in the results panel can be pointed at (clicked on) in order to view, in the left panel, the details for that particular title.

Figure 4. Comparison screen

Composite multimodal input: The system also supports true composite multimodality, where spoken or handwritten commands are integrated with pointing gestures made using the pen (in the tablet version) or by selecting items (in the remote control version). This allows users to quickly execute more complex commands by combining the ease of reference of pointing with the expressiveness of spoken constraints. While unimodally pointing at an actor button searches for all of the actor's movies, adding speech narrows the search to, for example, all of their comedies: "show comedy movies with THIS actor". Multimodal commands with multiple pointing gestures are also supported, allowing the user to 'glue' together references to multiple actors or directors in order to constrain the search. For example, users can say "movies with THIS actor and THIS director" and point at the 'Alan Rickman' button and then the 'John McTiernan' button in turn (Figure 2). Comparison commands can also be multimodal; for example, if the user says "compare THIS movie and THIS movie" and clicks on the two buttons on the right display for 'Die Hard' and 'The Fifth Element' (Figure 2), the resulting display shows the two movies side-by-side in the comparison screen (Figure 4).

Underlying multimodal architecture

The system consists of a series of components which communicate through a facilitator component (Figure 5). It builds on and extends the multimodal architecture underlying the MATCH system (Johnston et al., 2002).
The underlying database of movie information is stored in XML format. When a new database is available, a Grammar Compiler component extracts and normalizes the relevant fields from the database. These are used in conjunction with a predefined multimodal grammar template and any available corpus training data to build a multimodal understanding model and a speech recognition language model. The user interacts with the multimodal user interface client (Multimodal UI), which provides the graphical display. When the user presses 'CLICK TO SPEAK', a message is sent to the Speech Client, which activates the microphone and ships audio to a speech recognition server. Handwritten inputs are processed by a handwriting recognizer embedded within the multimodal user interface client. Speech recognition results, pointing gestures made on the display, and handwritten inputs are all passed to a multimodal understanding server which uses finite-state multimodal language processing techniques (Johnston and Bangalore, 2005) to interpret and integrate the speech and gesture. This model combines alignment of multimodal inputs, multimodal integration, and language understanding within a single mechanism. The resulting combined meaning representation (represented in XML) is passed back to the multimodal user interface client, which translates the understanding results into an XPATH query and runs it against the movie database to determine the new set of results. The graphical display is then updated to represent the latest query.

The system first attempts to find an exact match in the database for all of the search terms in the user's query. If this returns no results, a back-off and query relaxation strategy is employed. First the system tries a search for movies that contain all of the search terms, except stop words, independent of their order (an AND query). If this fails, it backs off further to an OR query of the search terms and uses an edit machine, based on Levenshtein distance, to retrieve the most similar item to the one requested by the user.
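A minimal sketch of this back-off and relaxation cascade is shown below. It is not the system's actual implementation: the movie records, the searchable 'text' field, the stop-word list, and the use of a plain Levenshtein function in place of a compiled edit machine are all simplifying assumptions.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance via the standard dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

STOP_WORDS = {"the", "a", "an", "of", "with", "and"}

def search(terms, movies):
    """movies: list of dicts with 'title' and a concatenated 'text' field."""
    query = " ".join(terms).lower()
    # 1. Exact match for all of the search terms, in order.
    hits = [m for m in movies if query in m["text"].lower()]
    if hits:
        return hits
    # 2. AND query: all non-stop-word terms, independent of order.
    content = [t.lower() for t in terms if t.lower() not in STOP_WORDS]
    hits = [m for m in movies if all(t in m["text"].lower() for t in content)]
    if hits:
        return hits
    # 3. OR query, ranked by edit distance of the title to the query.
    hits = [m for m in movies if any(t in m["text"].lower() for t in content)]
    return sorted(hits or movies,
                  key=lambda m: levenshtein(query, m["title"].lower()))[:10]
```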
Evaluation

After designing and implementing our initial prototype system, we conducted an extensive multimodal data collection and usability study with the two different interaction scenarios: tablet versus remote control. Our main goals for the data collection and statistical analysis were three-fold: collect a large corpus of natural multimodal dialogue for this media selection task, investigate whether future systems should be paired with a remote control or a tablet-like device, and determine which types of search and input modalities are more or less desirable.

Experimental set up

The system evaluation took place in a conference room set up to resemble a living room (Figure 6). The system was projected on a large screen across the room from a couch. An adjacent conference room was used for data collection (Figure 7). Data was collected in sound files, videotapes, and text logs. Each subject's spoken utterances were recorded by three microphones: wireless, array and stand-alone. The wireless microphone was connected to the system, while the array and stand-alone microphones were around 10 feet away. (Here we report results for the wireless microphone only; analysis of the other microphone conditions is ongoing.) Test sessions were recorded with two video cameras: one captured the system's screen using a scan converter, while the other recorded the user and couch area. Lastly, the user's interactions and the state of the system were captured by the system's logger.

The logger is an additional agent added to the system architecture for the purposes of the evaluation. It receives log messages from different system components as interaction unfolds and stores them in a detailed XML log file. For the specific purposes of this evaluation, each log file contains: general information about the system's components, a description and timestamp for each system event and user event, names and timestamps for the system-recorded sound files, and timestamps for the start and end of each scenario.

Figure 6. Data collection environment

Forty-four subjects volunteered to participate in this evaluation. There were 33 males and 11 females, ranging from 20 to 66 years of age. Each user interacted with both the remote control and tablet variants of the system, completing the same two sets of scenarios and then freely interacting with each system. For counterbalancing purposes, half of the subjects used the tablet and then the remote control, and the other half used the remote control and then the tablet. The scenario set assigned to each version was also counterbalanced.

Figure 7. Data collection room

Each set of scenarios consisted of seven defined tasks, four user-specialized tasks and five open-ended tasks. Defined tasks were presented in chart form and had an exact answer, such as the movie title that two specified actors/actresses starred in. For example, users had to find the movie in the database with Matthew Broderick and Denzel Washington. User-specialized tasks relied on the specific user's preferences, such as "What type of movie do you like to watch on a Sunday evening? Find an example from that genre and write down the title". Open-ended tasks prompted users to search for any type of information with any input modality. The tasks in the two sets paralleled each other. For example, if one set of tasks asked the user to find the highest-ranked comedy movie with Reese Witherspoon, the other set asked the user to find the highest-ranked comedy movie with Will Smith. Within each task set, the defined tasks appeared first, then the user-specialized tasks and lastly the open-ended tasks. However, for each participant, the order of defined tasks was randomized, as was the order of user-specialized tasks. At the beginning of the session, users read a short tutorial about the system's GUI, the experiment, and the available input modalities. Before interacting with each version, users were given a manual on operating the tablet/remote control. To minimize bias, the manuals gave only a general overview with few examples, and during the experiment users were alone in the room. At the end of each session, users completed a user-satisfaction/preference questionnaire and then a qualitative interview. The questionnaire consisted of 25 statements about the system in general, the two variants of the system, input modality options and search options. For example, statements ranged from "If I had [the system], I would use the tablet with it" to "If my spoken request was misunderstood, I would want to try again with speaking". Users responded to each statement with a 5-point Likert scale, where 1 = 'I strongly agree', 2 = 'I mostly agree', 3 = 'I can't say one way or the other', 4 = 'I mostly do not agree' and 5 = 'I do not agree at all'.
The qualitative interview allowed for more open-ended responses, where users could discuss the reasons for their preferences and their likes and dislikes regarding the system.

Results

Data was collected from all 44 participants. Due to technical problems, five participants' logs or sound files were not recorded in parts of the experiment. All collected data was used for the overall statistics, but these five participants had to be excluded from analyses comparing remote control to tablet.

Spoken utterances: After removing empty sound files, the full speech corpus consists of 3280 spoken utterances. Excluding the five participants subject to technical problems, the total is 3116 utterances (1770 with the remote control and 1346 with the tablet). The set of 3280 utterances averages 3.09 words per utterance. There was not a significant difference in utterance length between the remote control and tablet conditions. Users averaged 2.97 words per utterance with the remote control and 3.16 words per utterance with the tablet, paired t(38) = 1.182, p = n.s. However, users spoke significantly more often with the remote control. On average, users spoke 34.51 times with the tablet and 45.38 times with the remote control, paired t(38) = -3.921, p < .01.

ASR performance: Over the full corpus of 3280 speech inputs, word accuracy was 44% and sentence accuracy 38%. In the tablet condition, word accuracy averaged 46% and sentence accuracy 41%. In the remote control condition, word accuracy averaged 41% and sentence accuracy 38%. The difference across conditions was only significant for word accuracy, paired t(38) = 2.469, p < .02. In considering the ASR performance, it is important to note that 55% of the 3280 speech inputs were out of grammar, and perhaps more importantly, 34% were outside the functionality of the system entirely. On within-functionality inputs, word accuracy is 62% and sentence accuracy 57%. On the in-grammar inputs, word accuracy is 86% and sentence accuracy 83%. The vocabulary size was 3851 for this task. In the corpus, there are a total of 356 out-of-vocabulary words.

Handwriting recognition: Performance was determined by manual inspection of screen-capture video recordings. There were a total of 384 handwritten requests, with overall 66% sentence accuracy and 76% word accuracy.

Task completion: Since participants had to record the task answers on a paper form, task completion was assessed by whether participants wrote down the correct answer. Overall, users had little difficulty completing the tasks. On average, participants completed 11.08 out of the 14 defined tasks and 7.37 out of the 8 user-specialized tasks. The number of tasks completed did not differ across system variants. For the seven defined tasks within each condition, users averaged 5.69 with the remote control and 5.40 with the tablet, paired t(34) = -1.203, p = n.s. For the four user-specialized tasks within each condition, users averaged 3.74 on the remote control and 3.54 on the tablet, paired t(34) = -1.268, p = n.s.
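The per-user comparisons above are paired, since every participant used both variants; a minimal sketch of such a test with scipy is given below, with placeholder values standing in for the per-participant measurements.

```python
from scipy.stats import ttest_rel

# Per-user mean words per utterance in the two conditions; the values
# below are illustrative placeholders, one pair per participant
# measured in both conditions.
remote = [2.8, 3.1, 2.5, 3.4, 2.9, 3.0, 2.7, 3.2]
tablet = [3.0, 3.3, 2.9, 3.5, 3.1, 3.2, 2.8, 3.5]

t, p = ttest_rel(tablet, remote)            # paired t-test, df = n - 1
print(f"paired t({len(remote) - 1}) = {t:.3f}, p = {p:.3f}")
```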
Input modality preference: During the interview, 55% of users reported preferring the pointing (GUI) input modality over speech and multimodal input. When asked about handwriting, most users were hesitant to place it on the list. They also discussed how speech was extremely important, and, given a system with a low-error speech recognizer, using speech for input would probably be their first choice.

In the questionnaire, the majority of users (93%) chose 'strongly agree' or 'mostly agree' on the importance of making a pointing request. The importance of making a request by speaking had the next highest average, with 57% choosing 'strongly agree' or 'mostly agree'. The importance of multimodal and handwriting requests had the lowest averages, with 39% agreeing with the former and 25% with the latter. However, in the open-ended interview, users mentioned handwriting as an important back-up input choice for cases when the speech recognizer fails. Further support for input modality preference was gathered from the log files, which showed that participants mostly searched using unimodal speech commands and GUI buttons. Out of a total of 6082 user inputs to the system, 48% were unimodal speech and 39% were unimodal GUI (pointing and clicking). Participants requested information with composite multimodal commands 7% of the time and with handwriting 6% of the time.

Search preference: Users most strongly agreed with movie title being the most important way to search. For searching by title, more than half the users chose 'strongly agree' and 91% of users chose 'strongly agree' or 'mostly agree'. Slightly more than half chose 'strongly agree' for searching by actor/actress, and slightly less than half chose 'strongly agree' on the importance of searching by genre. During the open-ended interview, most users reported title as the most important means of searching.

Variant preference: Results from the qualitative interview indicate that 67% of users preferred the remote control over the tablet variant of the system. The most commonly reported reasons were familiarity, physical comfort and ease of use. The remote control preference is further supported by the user-preference questionnaire, where 68% of participants chose 'mostly agree' or 'strongly agree' with wanting to use the remote control variant of the system, compared to 30% of participants choosing 'mostly agree' or 'strongly agree' with wanting to use the tablet version.

Conclusion

With the range of entertainment content available to consumers in their homes rapidly expanding, the current access paradigm of direct manipulation of complex graphical menus and onscreen keyboards, and remote controls with far too many buttons, is increasingly ineffective and cumbersome. In order to address this problem, we have developed a highly flexible multimodal interface that allows users to search for content using speech, handwriting, pointing (using pen or remote control), and dynamic multimodal combinations of input modes. Results are presented in a straightforward graphical interface similar to those found in current systems, but with the addition of icons for actors and directors that can be used both for unimodal GUI and multimodal commands. The system allows users to search for movies over multiple different dimensions of classification (title, genre, cast, director, year) using the mode or modes of their choice. We have presented the initial results of an extensive multimodal data collection and usability study with the system. Users in the study were able to successfully use speech to conduct searches. Almost half of their inputs were unimodal speech (48%), and the majority of users strongly agreed with the importance of using speech as an input modality for this task. However, as also reported in previous work (Wittenburg et al., 2006), recognition accuracy remains a serious problem.
To understand the performance of speech recognition here, detailed error analysis is important. The overall word accuracy was 44%, but the majority of errors resulted from requests that lay outside the functionality of the underlying system, involving capabilities the system did not have or titles/cast absent from the database (34% of the 3280 spoken and multimodal inputs). No amount of speech and language processing can resolve these problems. This highlights the importance of providing more detailed help and tutorial mechanisms in order to appropriately ground users' understanding of system capabilities. Of the remaining 66% of inputs (2166) which were within the functionality of the system, 68% were in grammar. On the within-functionality portion of the data, the word accuracy was 62%, and on in-grammar inputs it was 86%. Since this was our initial data collection, an unweighted finite-state recognition model was used. The performance will be improved by training stochastic language models as data become available and by employing robust understanding techniques. One interesting issue in this domain concerns recognition of items that lie outside of the current database. Ideally the system would have a far larger vocabulary than the current database, so that it would be able to recognize items that are outside the database. This would allow feedback to the user to differentiate between a lack of results due to recognition or understanding problems and a lack of items in the database. This has to be balanced against the degradation in accuracy resulting from increasing the vocabulary. In practice we found that users, while acknowledging the value of handwriting as a back-up mode, generally preferred the more relaxed and familiar style of interaction with the remote control. However, several factors may be at play here. The tablet used in the study was the size of a small laptop and, because of cabling, had a fixed location on one end of the couch. In future work, we would like to explore the use of a smaller, more mobile tablet that would be less obtrusive and more conducive to leaning back on the couch. Another factor is that the in-lab data collection environment is somewhat unrealistic, since it lacks the noise and disruptions of many living rooms. It remains to be seen whether in a more realistic environment we might see more use of handwritten input. Another factor here is familiarity. It may be that users have more familiarity with the concept of speech input than with handwriting. Familiarity also appears to play a role in user preferences for remote control versus tablet. While the tablet has additional capabilities, such as handwriting and easier use of multimodal commands, the remote control is more familiar to users and allows for a more relaxed interaction, since they can lean back on the couch. Also, many users are concerned about the quality of their handwriting and may avoid this input mode for that reason. Another finding is that it is important not to underestimate the importance of GUI input: 39% of user commands were unimodal GUI (pointing) commands, and 55% of users reported a preference for GUI over speech and handwriting for input. Clearly, the way forward for work in this area is to determine the optimal way to combine more traditional graphical interaction techniques with the more conversational style of spoken interaction. Most users employed the composite multimodal commands, but they make up a relatively small proportion of the overall number of user inputs in the study data (7%).
Several users commented that they did not know enough about the multimodal commands and that they might have made more use of them if they had understood them better. This, along with the large number of inputs that were out of functionality, emphasizes the need for more detailed tutorial and online help facilities. The fact that all users were novices with the system may also be a factor. In future work, we hope to conduct a longer-term study with repeat users to see how previous experience influences the use of newer kinds of inputs such as multimodal commands and handwriting.
Re-defining the concept of hydration water in water under soft confinement

Water shapes and defines the properties of biological systems. Therefore, understanding the nature of the mutual interaction between water and biological systems is of primary importance for a proper assessment of biological activity and for the development of new drugs and vaccines. A handy way to characterize the interactions between biological systems and water is to analyze their impact on water density and dynamics in the proximity of the interfaces. It is well established that bulk water density and dynamical properties are recovered at distances on the order of ∼ 1 nm from the surface of biological systems. Such evidence led to the definition of hydration water as the thin layer of water covering the surface of biological systems and affecting, or even defining, their properties and functionality. Here, we review some of our latest contributions showing that phospholipid membranes affect the structural properties and the hydrogen bond network of water at greater distances than the commonly evoked ∼ 1 nm from the membrane surface. Our results imply that the concept of hydration water should be revised or extended, and they pave the way to a deeper understanding of the mutual interactions between water and biological systems.

I. INTRODUCTION

Water is a peculiar substance characterized by a plethora of dynamic and thermodynamic anomalies that make it the only liquid capable of sustaining life as we know it 1-3. For example, its very large heat capacity allows water to absorb and release heat at much slower rates compared to similar materials like silica. As a consequence, water acts as a thermostat that regulates the temperature of our bodies and, overall, of our planet, sheltering us from otherwise lethal daily and seasonal temperature variations. Water also has a very low compressibility, which allows blood to be pumped, without crystallizing, down to the most peripheral and narrow vessels delivering oxygen. Moreover, water stabilizes proteins and DNA, restricting the access to unfolded states, and shapes the basic structure of cell membranes. Cell membranes are very complex systems made of a large number of components, including proteins, cholesterol, glycolipids and ionic channels among others, but their framework is provided by phospholipid molecules forming a bilayer. Being solvated by water, the hydrophilic heads of the phospholipid molecules are exposed to the surrounding solvent molecules, while the hydrophobic tails are arranged side by side, hiding from water and extending in the region between two layers of heads. Stacked membranes are important constituents of several biological structures, including the endoplasmic reticulum and the Golgi apparatus, which process proteins for their use in animal cells, and the thylakoid compartments in chloroplasts and cyanobacteria, involved in photosynthesis. When in contact with membranes, water modulates their fluidity and mediates the interaction between different membranes, as well as between membranes and solutes (ions, proteins, DNA, etc.), regulating cell-membrane tasks such as transport and signaling functions 4. A thin layer of water, with a thickness of only ∼ 1 nm corresponding to a couple of molecular diameters, hydrates biological systems and is therefore called biological, or hydration, water 5.
So far, it has been thought that hydration water is directly responsible for the proper functioning of biological systems 2, although many issues are still open 5. Several experimental techniques have been adopted to study the interaction between hydration water molecules and membrane surfaces. Insights into the orientation of water molecules and into their order have been obtained from vibrational sum frequency generation spectroscopy and nuclear magnetic resonance (NMR) experiments 6,7. Evidence of enhanced hydrogen bonds (HBs) established between water molecules and the phospholipid heads has been described in experimental investigations with infrared spectroscopy 7,8. Moreover, far-infrared spectroscopy has shown that resonance mechanisms entangle the motion of phospholipid bilayers with that of their hydration water 9. Such complex interactions between water molecules and the hydrophilic heads cause perturbations in the dynamical properties of water. NMR spectroscopy has reported a breakdown of isotropy between the lateral and normal diffusion of water molecules with respect to the surface 10,11, and rotational dynamics has been the focus of several experimental investigations using ultrafast vibrational spectroscopy 12, terahertz spectroscopy 13 and neutron scattering 14.

Atomistic molecular dynamics (MD) simulations have also been widely adopted to inspect the microscopic details of hydration water (with the obvious drawback of relying on a particular simulation model). The dynamical slow-down of water due to the interaction with phospholipid membranes reported in NMR experiments 10,11 has been confirmed in MD simulations 15,16. MD simulations have also provided important insights into the molecular ordering and rotation dynamics of water solvating phospholipid headgroups 15,17, as well as into quantifying, by introducing correlation functions, the decay of the water orientational degrees of freedom 18-22.

Here we review some of our recent computational investigations of water nanoconfined between stacked phospholipid membranes, reporting evidence that the membrane affects the structural properties of water and its hydrogen bond network at distances much larger than the often invoked ∼ 1 nm. Our results are the outcome of MD simulations of water nanoconfined in phospholipid membranes. Water is described via a modified TIP3P model 23. As a typical model membrane, we have used 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC) lipids. DMPC is a phospholipid with a hydrophobic tail formed of two myristoyl chains and a hydrophilic head containing a phosphate and a choline, where the N atom interacts mostly with water oxygen atoms and the P atom interacts mostly with water hydrogen atoms. Choline-based phospholipids are ubiquitous in cell membranes and commonly used in drug-targeting liposomes 4. In Fig. 1 we report a representative snapshot of the water-DMPC system.

As observed in Ref. 21, at ambient conditions the density profile of water molecules as a function of the distance from the average position of the phosphorus atoms in the DMPC lipids displays no layered structure. In fact, due to thermal fluctuations, the membrane forms a smeared-out interface that is ∼ 1 nm wide, based on the phospholipid head density 21. However, the interface forms instantaneous layers that can be revealed if, following Pandit et al. 24, we consider the instantaneous local distance ξ, defined as the distance of each water molecule from the closest cell of a Voronoi tessellation centered on the phosphorus and nitrogen atoms of the phospholipid heads (Fig. 2) 20.

FIG. 2. Density profile ρ of water molecules as a function of the instantaneous local distance ξ from the membrane interface at ambient conditions (T = 303 K, average pressure 1 atm, corresponding to bulk density ρ = 1 g/cm³) and at a hydration level, defined as the number of water molecules per phospholipid, of ω = 34. Water at ξ < 0 belongs to the interior of the membrane, while water at ξ > 5 Å has the same density as the bulk and can be associated with the exterior of the membrane. The density of water at 0 < ξ < 5 Å shows a clear maximum, revealing the presence of a hydration layer 20. At higher hydration we observe more than one hydration layer.
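A simplified sketch of how ξ can be estimated for a single frame is given below. It relies on the fact that the 2D Voronoi cell containing a water's in-plane projection belongs to its in-plane nearest P/N atom, and it approximates ξ by the signed vertical distance to that atom; the array layouts, the single-leaflet sign convention, and the 5 Å layer boundary are our illustrative assumptions, not the exact construction of Refs. 20 and 24.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_distance_xi(water_xyz, head_xyz, leaflet_sign=1.0):
    """Approximate instantaneous local distance xi for one frame.

    water_xyz: (Nw, 3) water oxygen positions; head_xyz: (Nh, 3)
    positions of the P and N atoms of one leaflet. The 2D Voronoi cell
    containing a water's (x, y) projection belongs to its in-plane
    nearest P/N atom, so xi is approximated here by the signed vertical
    distance to that atom. leaflet_sign = -1 flips the sign for the
    lower leaflet so that xi < 0 is always the membrane interior.
    """
    tree = cKDTree(head_xyz[:, :2])            # in-plane P/N positions
    _, owner = tree.query(water_xyz[:, :2])    # Voronoi-cell owner per water
    return leaflet_sign * (water_xyz[:, 2] - head_xyz[owner, 2])

def classify_water(xi):
    """Interior (xi < 0), first hydration layer (0 <= xi < 5 A), exterior."""
    labels = np.full(xi.shape, "exterior", dtype=object)
    labels[xi < 5.0] = "hydration layer"
    labels[xi < 0.0] = "interior"
    return labels
```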
II. DYNAMICS

Numerical simulations have shown that hydration water suffers a dramatic slow-down not just in stacked phospholipids 15,16,18-21,24-28, but also in proteins and sugars 29-33. Insights into the dynamical slow-down can be obtained by inspecting the translational diffusion and the rotational dynamics of hydration water molecules. The diffusion coefficient parallel to the surface of the membrane, D∥, can be obtained from the linear regime reached by the mean squared displacement at sufficiently long times via the Einstein relation

D∥ = lim_{t→∞} ⟨|r∥(t₀ + t) − r∥(t₀)|²⟩ / (4t),  (1)

where r∥(t) is the projection of the center of mass of a water molecule on the plane of the membrane and the angular brackets ⟨...⟩ indicate an average over all water molecules and time origins t₀. Using DMPC as a model phospholipid membrane, Calero et al. 20 have found that water molecules are slowed down by an order of magnitude when the hydration level ω is reduced from 34 to 4 (Fig. 3). This result is in qualitative agreement with experimental and other computational studies 11-13,18,19. In particular, in conditions of very low hydration, the parallel diffusion is as low as 0.13 nm²/ns, because water molecules interact with both the upper and the lower leaflet, hence remaining trapped. Upon increasing the level of hydration ω, Calero et al. 20 have shown that D∥ increases monotonically. This observation suggests that, as the physical separation between the leaflets increases, the hydration water acts as a screen for the electrostatic interactions between water and the leaflets.

The decreasing interaction of hydration water with the two leaflets can also be observed by inspecting the rotational dynamics of water molecules via the rotational dipolar correlation function

Cμ̂(t) = ⟨μ̂(t₀ + t) · μ̂(t₀)⟩,  (2)

where μ̂(t) is the direction of the water dipole vector at time t and ⟨...⟩ denotes the ensemble average over all water molecules and time origins. This quantity is related to terahertz dielectric relaxation measurements used to probe the reorientation dynamics of water 13. From Eq. (2) it is possible to define the relaxation time

τμ̂ = ∫₀^∞ Cμ̂(t) dt,  (3)

which is independent of the analytical form of the correlation function Cμ̂(t). As for D∥, the rotational dynamics speeds up with the degree of hydration (Fig. 3), confirming that the interactions between hydration water and the two leaflets modify the overall water dynamics 12,13,18-20.

FIG. 4. Partition of membrane hydration water into fast (squares), irrotational (triangles) and bulk-like (circles) water molecules, following the assumption in Ref. 13, as a function of the hydration level ω. As discussed in Ref. 20, the assumption of the existence of fast water leads to inconsistencies.
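Both D∥ of Eq. (1) and Cμ̂(t) of Eq. (2) can be estimated directly from trajectory arrays; a minimal numpy sketch is given below. The array layout (frames × molecules × components), the assumption of unwrapped coordinates, and the choice of the late-time fitting window are ours, for illustration.

```python
import numpy as np

def parallel_diffusion(r_par, dt, fit_fraction=0.5):
    """D_par from the Einstein relation, Eq. (1).

    r_par: (n_frames, n_waters, 2) in-plane center-of-mass positions,
    dt: time between frames. The MSD is averaged over molecules and
    time origins, and D_par is the late-time slope divided by 4 (two
    in-plane dimensions). Assumes unwrapped coordinates.
    """
    n = r_par.shape[0]
    lags = np.arange(1, n)
    msd = np.array([np.mean(np.sum((r_par[lag:] - r_par[:n - lag]) ** 2,
                                   axis=-1))
                    for lag in lags])
    t = lags * dt
    sel = t >= fit_fraction * t[-1]           # keep only the linear regime
    slope = np.polyfit(t[sel], msd[sel], 1)[0]
    return slope / 4.0

def dipole_correlation(mu_hat):
    """C_mu(t) of Eq. (2); mu_hat: (n_frames, n_waters, 3) unit dipole
    vectors. Averages over molecules and time origins; C(0) = 1."""
    n = mu_hat.shape[0]
    return np.array([np.mean(np.sum(mu_hat[lag:] * mu_hat[:n - lag], axis=-1))
                     for lag in range(n)])
```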
To account for the rapidly relaxing signals associated with the reorientation of water molecules in experiments 34, Tielrooij et al. 13 assumed the existence of three water species near a membrane: (i) bulk-like, with characteristic rotational correlation times of a few picoseconds; (ii) fast, with rotational correlation times of a fraction of a picosecond; and (iii) irrotational, with characteristic times much larger than 10 ps. Calero et al. 20 show that it is possible to analyze their simulations under this assumption (Fig. 4); however, the resulting fitting parameters for the correlation times do not show any regular behavior as a function of ω, questioning the existence of fast water near a membrane. This possibility, on the other hand, cannot be ruled out completely, as it could be related to the presence of heterogeneities, such as those associated with water molecules forming a single hydrogen bond to a lipid at low hydration 34. Nevertheless, Calero et al. 20 have shown that a consistent explanation of the changes in the dynamics as a function of ω is reached by observing that, upon increasing the hydration level, water first completely fills the interior of the membrane and then accumulates in layers in the exterior region. The authors rationalized this observation by noting that the inner-membrane (or interior) water has an extremely slow dynamics as a consequence of the robustness of the water-lipid HBs. Moreover, the water-water HBs within the first hydration layer of the membrane slow down, with respect to bulk water, due to the reduction of hydrogen-bond switching at low hydration. As shown by Samatas et al. 22, these effects are emphasized when the temperature decreases: water near the membrane has a glassy-like behavior at T = 288.6 K, with the rotational correlation time of vicinal water (within 3 Å from the membrane) comparable to that of bulk water ≈ 30 K colder, but with a much smaller stretched exponent, suggesting a larger heterogeneity of relaxation modes.

FIG. 5. Lines are guides for the eyes. Vertical dashed lines at ξ = 0 and 5 Å mark the interfaces between the water within the interior of the membrane, the first hydration layer of water, and the water exterior to the membrane. The interface at ξ = 5 Å separates bound water and unbound water.

Both the translational and rotational dynamics of water molecules are strongly determined by their local distance to the membrane. Calero and Franzese have recently shown 28 that the hydration water within the interior of the membrane is almost immobile, the first hydration layer, with ξ ≤ 5 Å, is bound to the membrane, and the exterior water is unbound (Fig. 5). The authors have identified the existence of an interface between the bound and the unbound hydration water at which the dynamics undergoes an abrupt change: bound water rotates 63% less than bulk and diffuses 85% less than bulk, while unbound water only 20% and 17% less, respectively. To rationalize the origin of the three dynamically different populations of water, (i) immobile within the membrane interior, (ii) bound in the first hydration layer, and (iii) unbound at the exterior of the membrane, Calero and Franzese turned their attention to the investigation of the hydrogen bonds (HBs, Fig. 6). Based on the calculation of the average number of HBs, n_HB, they found that the inner water is an essential component of the membrane that plays a structural role, with HBs bridging between lipids, consistent with previous results 35,36.
In particular, Calero and Franzese have found that, in the case of a fully hydrated membrane, ≈ 45% of the water-lipids HBs in the interior of the membrane are bridging between two lipids. The fraction of bridging HBs, with respect to the total number of water-lipids HBs, reduces to approximately 1/4 within the first hydration shell. Hence, also the bound water has a possible structural function for the membrane and, in this sense, can be considered as another constituent of the membrane that regulates its properties and contributes to its stability. Moreover, they found that unbound hydration water has no water-lipids HBs. However, even at hydration level as low as ω = 4, they find that ≈ 25% of inner water, and ≈ 18% in the first hydration shell, is unbound, i.e. has only water-water HBs. This could be the possible reason why it has been hypothesized the existence of fast water in weakly hydrated phospholipid bilayers in previous works 13 . Nevertheless, as already discussed, Calero and Franzese clearly showed that unbound water is definitely not fast, being at least one order of magnitude slower than bulk water. In order to further rationalize the interactions between hydration water and phospholipid heads, we computed 21 the correlation function where δ is the N-O vector or the P-HO vector. Interestingly, we have found that the P-HO vector has a longer lifetime compared to the N-O vector, indicating that the interactions between P and water hydrogen atoms are stronger than the interactions between N and O 21 . This conclusion is consistent with the observation that the P-HO two body pair correlation function is characterized by a first peak at a distance shorter than the N-O two body pair correlation function ( Fig. 7 upper panel). Starting from the observation that the N-O and the P-HO vectors have different lifetimes, we hypothesized that such difference can have an effect on the rotational dynamics of hydration water. In particular, we supposed that the rotations around the water dipole moment µ are different with respect to the rotations around −→ OH vector. In Ref. 21 , we computed Cμ and C−→ OH and we fit the two correlation functions with a double exponential, with characteristic times τ 1 and τ 2 , that intuitively reveals the effects of the electrostatic interactions on the slow relaxation. We calculated the relaxation times τ 1 and τ 2 in bins parallel to the membrane surface and centered at increasing distances from the membrane (Fig. 7, middle and lower panels). We found that the slow relaxation time, τ 1 , is orders of magnitude smaller than the very slow relaxation time, τ 2 . In particular, approaching the membrane, the −→ OH vector relaxes slower than to theμ vector. This is in agreement with the finding that the P-HO interaction is stronger than the N-O interaction. This result can be rationalized by observing that the lipids have different (delocalized) charges on the N-heads and on the P-functional groups and that these charges affect the rotation of water around the two vectors in different way. The slowing down of the rotational degrees of freedom (Fig. 7) decreases upon increasing the distance from the membrane surface. In particular, at distances of ∼ 1.3 nm from the membrane the relaxation times for thê µ vector and for the −→ OH vector become indistinguishable, as expected in bulk water. 
In view of the very high values of the relaxation times in the proximity of the membrane, we hypothesized that the electrostatic interactions with phospholipid heads might cause a slow-down in the diffusivity of water molecules comparable (and hence measurable) with that of water at low temperatures 21. To check our hypothesis, we measured the standard displacement of water molecules in terms of bond units (BU), defined as the distance traveled by water molecules normalized with respect to the oxygen-oxygen mean distance (which is a temperature-independent quantity), and we compared it with the same quantity for water at supercooled conditions. For a large enough simulated time, a standard displacement of < 1 BU would correspond to water molecules rattling in the cage formed by their nearest neighbors. This case would represent a liquid in which the translational degrees of freedom are frozen. We found that, in the proximity of the membrane surface, water molecules suffer a dramatic slow-down of ∼ 60% with respect to the value of bulk water at biological thermodynamic conditions. Moreover, upon increasing the distance from the lipid heads, we found that bulk diffusivity is recovered at ∼ 1 nm, the domain of definition of hydration water. Considering that the diffusivity of water close to the lipid heads is comparable with that of water at supercooled conditions, we concluded that such a slow-down could be interpreted effectively as a reduction of the thermal energy of water 21.

III. STRUCTURE

As presented above, the dynamics of bulk water is recovered approximately at ∼ 1.3 nm away from a membrane. However, as we discuss in the following, the structural analysis of hydration water 21 shows how long-range interactions spread to much larger distances, opening a completely new scenario for the understanding of water-membrane coupling. In particular, we analyzed 21 how the water intermediate-range order (IRO) changes moving away from a membrane. Modifications in the connectivity of disordered materials induce effects that extend beyond the short range. This is, for example, the case for amorphous silicon and amorphous germanium 37. Likewise, at specific thermodynamic conditions, water acquires structural properties that go beyond the tetrahedral short range and are comparable to those of amorphous silicon 38. In Ref. 21 we adopted a sensitive local order metric (LOM) introduced by Martelli et al. 39 to characterize local order in the condensed phase. The LOM provides a measure of how far the local neighborhood of a particle j (j = 1, ..., N) is from the ground state. For each particle j, the LOM maximizes the spatial overlap between the local neighborhood of j, made of M neighbours i with coordinates P_i^j (i = 1, ..., M), and a reference structure (the ground state) with coordinates R^j. The LOM is defined as

S(j) = max_{θ,φ,ψ; P} (1/M) Σ_{i=1}^{M} exp[−|P_i^j − R_{i_P}^j|² / (2σ²M)],  (5)

where (θ, φ, ψ) are the Euler angles for a given orientation of the reference structure R^j, i_P are the indices of the neighbours i under the permutation P, and σ is a parameter representing the spread of the Gaussian domain. The parameter σ is chosen such that the tails of the Gaussian functions stretch to half of the O-O distance in the second coordination shell of j in the structure R^j. As reference R^j, we choose the ground state for water at ambient pressure, i.e., cubic ice. The site average of Eq. (5),

S_C = (1/N) Σ_{j=1}^{N} S(j),  (6)

is by definition the score function and gives a global measure of the symmetry in the system with respect to the reference structure.
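A sketch of a numerical evaluation of Eqs. (5) and (6) is given below. Instead of an exhaustive search over Euler angles, it samples random orientations (a simplification we introduce for illustration), while for each orientation the optimal permutation is found exactly with the Hungarian algorithm, since the objective is a sum of independent pair terms.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.transform import Rotation

def lom_score(P, R, sigma, n_rotations=500, seed=0):
    """Approximate local order metric S(j) of Eq. (5) for one site.

    P, R: (M, 3) arrays with the M neighbor and reference positions,
    both centered on site j. The exact metric maximizes over all
    orientations and permutations; here orientations are sampled at
    random, while for each orientation the optimal permutation is found
    exactly with the Hungarian algorithm.
    """
    M = len(P)
    rotations = Rotation.random(n_rotations, random_state=seed)
    best = 0.0
    for k in range(n_rotations):
        R_rot = rotations[k].apply(R)
        # overlap[i, l]: Gaussian overlap of neighbor i with reference l
        d2 = np.sum((P[:, None, :] - R_rot[None, :, :]) ** 2, axis=-1)
        overlap = np.exp(-d2 / (2.0 * sigma**2 * M))
        rows, cols = linear_sum_assignment(-overlap)  # maximize total overlap
        best = max(best, overlap[rows, cols].mean())
    return best

def score_function(neighborhoods, R, sigma):
    """Score function S_C of Eq. (6): the site average of S(j)."""
    return float(np.mean([lom_score(P, R, sigma) for P in neighborhoods]))
```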
The LOM and the score function have provided physical insights into a variety of systems 40-42; hence, they are particularly suitable to characterize 21 and quantify 43 how far the membrane affects the structural properties of water. We found 21 that the overall score function, Eq. (6), for water tends to increase at very short distances from the membrane and is comparable to the bulk value 1.3 nm away from the membrane (Fig. 8, upper panel). The IRO enhancement is not dramatic, but it cannot simply be discarded. Hence, both the dynamics and the IRO are affected as far as ≈ 1.3 nm away from the membrane. Therefore, in Ref. 21 we proposed that the dynamical slow-down and the enhancement of the IRO are two related effects. We suggested that the dynamical slow-down corresponds to an effective reduction of thermal noise that, ultimately, allows water molecules to settle into slightly more ordered spatial configurations in the proximity of the membrane. Moving away from the membrane, at distances ≳ 1.3 nm, S_C seems to reach a plateau, suggesting that the convergence to the bulk value should fall within the distance domain of hydration water. To check this, we computed the probability density distribution P(S_C) of Eq. (6) in the bin centered at δz = 2 nm away from the surfaces (z = 3.5 nm), and we compared it with the distribution of S_C computed in a box of bulk water at the same thermodynamic conditions (Fig. 8, lower panel). Surprisingly, the two distributions do not overlap. This result indicates that the membrane perturbs the structure of water at the intermediate range up to, at least, ∼ 1.6 nm, considering the half bin-width. This distance is much larger than that defining hydration water.

FIG. 8. Upper panel: S_C of water molecules belonging to a bin centered at distance z from the center of the lipid bilayer (at z = 0), with a bin width of 1/10 of the entire system. Vertical dashed orange lines mark the region where S_C approaches the value in bulk water. Lower panel: water reaches the bulk S_C value only ≈ 2.8 nm away from the water-lipid interfaces, as shown by the difference ΔP(S_C) between the probability density distribution P(S_C) for bulk water and that at a specific distance δz from the membrane. Here we show ΔP(S_C) for δz = 2.0 nm (red line), with the bin centered at z = 3.5 nm, and for δz = 2.8 nm (green line), with the bin centered at z = 4.3 nm.

We found 43 an overlap between the bulk-water distribution and that of the confined water only if, between the two membrane leaflets, there is enough water to reach distances as far as δz = 2.8 nm from the membrane. Such a remarkable result indicates that the membrane affects the structural properties of water at least as far as ∼ 2.4 nm, accounting for the ∼ 0.4 nm half bin-width. This distance is about twice the domain of definition of hydration water. Therefore, the definition of hydration water, as well as its role, should be extended, or revised, to account for the repercussions of the membrane on the water structure. To frame our observations in a consistent picture, in addition to the structural analysis of the membrane effects on the water-O positions, we next analyzed the topology of the hydrogen bond network (HBN), which provides another measure of the IRO, but from the perspective of the HBs.
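A minimal sketch of the slab analysis described above, computing the S_C profile and the distribution difference ΔP(S_C) with respect to bulk, follows. The arrays of per-molecule S_C values and distances, the bin edges, and the synthetic data are placeholders, not the data of Refs. 21 and 43.

```python
import numpy as np

def sc_profile(z, sc, edges):
    """Mean score function in slabs parallel to the membrane.

    z : per-molecule distances from the membrane; sc : per-molecule S_C.
    """
    idx = np.digitize(z, edges) - 1
    return np.array([sc[idx == k].mean() for k in range(len(edges) - 1)])

def distribution_difference(sc_slab, sc_bulk, bins=50, rng=(0.0, 1.0)):
    """Difference P_bulk(S_C) - P_slab(S_C); it vanishes when the slab
    recovers the structure of bulk water."""
    p_slab, _ = np.histogram(sc_slab, bins=bins, range=rng, density=True)
    p_bulk, _ = np.histogram(sc_bulk, bins=bins, range=rng, density=True)
    return p_bulk - p_slab

# Demo with synthetic values standing in for computed S_C data.
rng_ = np.random.default_rng(2)
z = rng_.uniform(0.0, 4.0, 10000)
sc = rng_.beta(2, 5, 10000)
print(sc_profile(z, sc, np.linspace(0.0, 4.0, 9)))
```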
IV. NETWORK TOPOLOGY

The properties of network-forming materials are governed by the underlying network of bonds 44. However, the topology of this network is rarely investigated because of the difficulty of such an analysis. A possible approach is through ring statistics, which consists of defining, characterizing, and counting the closed loops made of links (or bonds) between the vertices of the network. Ring statistics makes it possible to study, in particular, the network topology of amorphous systems 45,46, clathrate hydrates 47, and chalcogenide glasses 48. It is also an essential tool for characterizing continuous random networks 37,49-53. After a hesitant debut in the field of water 54,55, ring statistics has been increasingly embraced as a tool to study water properties, starting from its application by Martelli et al. to characterize the transformations of the bulk-water HBN near the liquid-liquid critical point 56. Since then, ring statistics has been an essential tool for investigating the properties of water in its liquid phase 44,57,58, as well as in its amorphous states 39,40,44, and for inspecting the dynamics of homogeneous nucleation 59-61. Based on the idea that the connectivity in network-forming materials governs their properties, we explored how the topology of the HBN changes when water is confined between phospholipid membranes 43. In fact, the HBN is what differentiates water from "simple" liquids 62. In water the HBN is directional; hence, there are several ways of defining and counting rings. Martelli et al. showed that each of these possibilities carries a different, but complementary, physical meaning 44. Here we use a definition of the HB that was initially introduced by Luzar and Chandler 63 and is common in the field. Other definitions are possible, owing to our limited understanding of HBs; nevertheless, it has been shown that all these definitions are in satisfactory qualitative agreement over a wide range of thermodynamic conditions 64,65. In Fig. 9 we present three possible ways of defining rings in a directional network, as in the case of water. The first (Fig. 9, top) explicitly looks for the shortest ring 66 starting from molecule 1 when this molecule donates one HB, regardless of whether the other molecules in the ring accept or donate a bond. This definition emphasizes the intrinsic directional nature of the HBN. The second definition (Fig. 9, center) considers only the shortest ring formed when molecule 1 can only accept a HB. The third definition (Fig. 9, bottom), adopted by Martelli et al. 44, ignores both the donor/acceptor nature of the starting molecule and the shortest-ring restriction, leading to a higher number of rings. The reader can refer to the original work 44 for further details about the definitions and their physical meaning in the case of bulk liquid and glassy water at several thermodynamic conditions. The authors of Ref. 43 computed the probability of having an n-folded ring, P(n), as a function of the distance z from the membrane.
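As an illustration of the first definition, a minimal Python sketch of a shortest-ring search along donated HBs is given below. The dictionary-based network representation and the toy hexagonal loop are assumptions for the example, and the primitive-ring (no sub-ring) check is omitted for brevity.

```python
from collections import deque

def shortest_ring(donates, start, max_len=12):
    """Length of the shortest directed ring through `start`.

    donates : dict mapping each molecule to the molecules it donates an
    HB to (directional network, H -> O). Follows definition 1: begin
    from a donated bond and walk along donated bonds until returning
    to `start`, up to `max_len` steps. Returns None if no ring exists.
    """
    best = None
    for first in donates.get(start, ()):
        seen = {start, first}
        queue = deque([(first, 1)])
        while queue:                     # breadth-first: shortest path first
            node, depth = queue.popleft()
            if depth >= max_len:
                continue
            for nxt in donates.get(node, ()):
                if nxt == start:
                    best = min(best or depth + 1, depth + 1)
                elif nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return best

# Hypothetical toy network: a hexagonal loop 0 -> 1 -> ... -> 5 -> 0.
hexagon = {i: [(i + 1) % 6] for i in range(6)}
print(shortest_ring(hexagon, 0))   # -> 6
```

In a production analysis, the loop over starting molecules would accumulate the histogram P(n) per distance bin.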
FIG. 9. Schematic representation of three possible ways of defining rings in the directional water network. In each case, we start counting from the water molecule labeled 1, with O atoms in solid brown and H atoms in white, and we follow the directional HBs from H to O (arrows) along the HBN, until we return to molecule 1 or until we exceed 12 steps. We consider only rings that cannot be decomposed into sub-rings. Top: a ring is formed only when molecule 1 donates HBs (brown arrow); in the example, the shortest ring is the hexagonal one (blue arrows). Center: a ring is formed when molecule 1 donates or accepts (brown arrows) HBs; in the example, the shortest ring is the pentagonal one (arrows). Bottom: any ring formed by molecule 1 is considered, starting from any of its HBs (brown arrows), without bond or ring-length constraints; in the example, there are a hexagonal and a pentagonal ring. Martelli et al. adopted the latter definition in Ref. 43.

They found that near the membrane P(n) is strikingly different from that of bulk water (Fig. 10, upper panel). In particular, the distribution is richer in hexagonal and shorter rings and poorer in longer rings. This result points towards two main conclusions. (i) For membrane-hydration water, at a distance z ≤ 0.8 nm, the HBN tends to be preferentially ice-like, i.e., dominated by hexagonal rings. This observation is consistent with the results, discussed in the previous sections, showing that membrane-vicinal water is characterized by enhanced IRO and slower dynamics than bulk water. (ii) The reduced number of longer rings in the hydration water is consistent with the reduction of the overall dimensionality of the system due to the interface: the fluctuating membrane surface reduces the space available to the HBN in the first layer of hydration water. All the P(n) calculated at larger distances, z > 0.8 nm, are quite different from that of the hydration water and gradually converge towards the bulk case upon increasing z. In particular, the probability of hexagonal rings decreases progressively, while longer rings become more and more frequent. This sudden change in P(n), between the first and the following bins, is consistent with the results, discussed in the previous sections, demonstrating a drastic change in structure and dynamics between bound water, in the first hydration layer, and unbound water, away from the membrane 28. Here, the border between the two regions increases from ∼ 0.5 nm 28 to ∼ 0.8 nm because of the membrane fluctuations, which are not filtered out in Ref. 43, and because of the spatial resolution, i.e., the bin size, of the analysis. The HBN of bulk water is finally recovered in the bin centered at z = 2.8 nm away from the membrane, i.e., for z ≥ 2.4 nm. Remarkably, this distance corresponds to that at which water recovers the IRO of bulk water 21, as discussed in the previous section. This important result indicates a clear connection between the structural properties of water molecules and the topology of the HBN, while further pointing toward the necessity of revising the concept of hydration water. The quality of the HBN, in terms of broken and intact HBs, is a tool of fundamental importance for casting the topology of the HBN into a consistent and complete physical framework. As a matter of fact, the presence of coordination defects affects the fluidity of water and is directly related to its capability of absorbing long-range density fluctuations 38. Therefore, the authors of Ref. 43 complemented their investigation of the HBN topology with an analysis of its quality. They decomposed the HBs per water molecule into acceptor (A) and donor (D) types (Fig. 10, lower panel), labeling as A2D2 a water molecule with perfect coordination, i.e., donating two bonds and accepting two, and as AxDy the others, accepting x and donating y bonds.
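A minimal Python sketch of the AxDy bookkeeping described above is given below; the (donor, acceptor) pair list and the toy bonds are assumptions for the example.

```python
from collections import Counter

def classify_defects(hb_pairs, n_molecules):
    """Tally AxDy coordination states from a list of water-water HBs.

    hb_pairs : iterable of (donor, acceptor) molecule indices, one entry
    per hydrogen bond (water-water only; water-lipid HBs are excluded,
    as in the defect analysis described above).
    Returns the fraction of molecules in each AxDy state.
    """
    donated, accepted = Counter(), Counter()
    for d, a in hb_pairs:
        donated[d] += 1
        accepted[a] += 1
    states = Counter(
        f"A{accepted[m]}D{donated[m]}" for m in range(n_molecules)
    )
    total = sum(states.values())
    return {k: v / total for k, v in sorted(states.items())}

# Toy example: four molecules, hypothetical bond list.
# Molecule 0 donates to 1 and 2 and accepts from 1 and 3 -> A2D2.
print(classify_defects([(0, 1), (0, 2), (1, 0), (3, 0)], 4))
```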
They focused their attention on the following coordination configurations: A1D1, A2D1, A1D2, A2D2 and A3D2, as other configurations do not contribute significantly. First, they checked that in bulk water, at ambient conditions, the predominant configuration is A2D2. For the TIP3P model of water, this configuration accounts for ∼ 35% of the total composition. The second most dominant configuration in bulk is A1D2 with ∼ 20%, followed by A2D1 with ∼ 13%, A1D1 with ∼ 12% and, finally, A3D2, accounting for less than 10% (Fig. 10, lower panel). This distribution qualitatively reflects the distribution in ab initio liquid water at the same thermodynamic conditions 67, suggesting that classical potentials can carry the proper physical information even in very complex systems such as biological interfaces. In the proximity of the membrane, the network of HBs deviates largely from that of bulk water, except for the under-coordinated configuration A2D1. In particular, the coordination defects A1D1 and A1D2 dominate the distribution, with ∼ 25% each, followed by the configurations A2D1 and A2D2, with ∼ 15% each, and a minor percentage of the higher coordination defect A3D2, with ∼ 3%. However, the small percentage of perfectly coordinated configurations, A2D2, near the membrane may seem inconsistent with the higher local order observed at the same distance 21,43 and with the enhanced hexagonal ring statistics of the HBN 43, already discussed. This discrepancy is only apparent, for the following two reasons. First, both the structural score function, S_C, and the ring statistics measure the IRO beyond the short range; on the contrary, the quality of the HBN, in terms of defects, measures only the short-range order. Second, the defect analysis includes only HBs between water molecules and does not account for the strong HBs between water molecules and the phospholipid head groups. Indeed, as discussed in the previous section 28, ∼ 30% of the water molecules in the first hydration shell are bound to the membrane by at least one HB. Away from the membrane, upon increasing the distance, Martelli et al. 43 observed a progressive enhancement of perfectly tetra-coordinated configurations (Fig. 10, lower panel). They found a progressive depletion of all coordination defects, up to recovering the bulk-water case at distance z ≥ 2.4 nm from the membrane, as for the probability distribution of S_C and the HBN topology. The intriguing evidence that the under-coordinated defect A2D1 remains almost constant at all distances is, for the moment, unexplained. It could be due to a variety of reasons, ranging from the presence of water-membrane HBs in the first hydration layer to the propagation of defects into the bulk, and it would require a detailed study.

V. CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS

The results summarized in this short review question our common understanding of hydration water near soft membranes, such as those in biological systems. This water layer, often called bio-water, is usually considered to be ∼ 1 nm wide and is regarded as the amount of water that directly shapes and defines biological activity in proteins, cells, DNA, etc. This definition has been proposed on the basis of results, both from experiments and from computations, showing that the water dynamics and density are affected by the biological interface within ∼ 1 nm, while they recover the bulk behavior at larger distances.
In our calculations, based on well-established models of water nanoconfined between DMPC membranes, we instead found new evidence indicating the need for a revised definition of hydration water. We reached this conclusion by focusing on physical quantities that had not been thoroughly considered before, or not at all. In particular, by considering the instantaneous local distance of water from the membrane, Calero and Franzese were able to unveil the existence of a new interface between bound and unbound water ∼ 0.5 nm away from the membrane-water interface 28. Bound water behaves like a structural component of the membrane and has translational and rotational dynamics intermediate between those of water inside and outside the membrane 28. Bound-water dynamics is dominated by the strong HBs with the membrane and is orders of magnitude slower than that of unbound water. The dynamics of bulk water is recovered only ∼ 1.3 nm away from the membrane. However, we showed that the membrane interface affects the structure of the hydration water at a distance almost twice as large, up to, at least, ∼ 2.4 nm 21. We obtained this result by analyzing how the water structure, and its IRO, changes moving away from the membrane. To this end, we evaluated the score function, a structural observable that quantifies how close the local structure is to a reference configuration, in our case cubic ice. Also in this case, we found that water ∼ 1.3 nm away from the membrane has a small but measurable IRO enhancement. Hence, within this range both the dynamics and the structure of hydration water undergo an effective reduction of the thermal noise, which we interpret as a consequence of the interaction with the membrane. We have also shown that the different chemical species constituting the lipid heads interact with water molecules with different strengths, hence providing a rationale for the contributions to the observed dynamical slow-down in the proximity of the surface 21. Furthermore, Martelli et al. 43 analyzed the IRO from the HB perspective by studying the HBN topology and its ring statistics. They found that water within ∼ 0.8 nm of the average position of the fluctuating membrane has an excess of hexagonal and shorter rings, and a lack of longer rings, with respect to bulk water. Moreover, the defect analysis of the HBN showed that water in this ∼ 0.8 nm-wide layer has a lack of tetra-coordinated and an excess of bi-coordinated water molecules. This result does not contradict the enhanced water IRO within the same layer, because the HBN defect analysis measures only the short-range order and does not account for the water-membrane HBs. Martelli et al. 43 also found a sudden change in the HBN around 0.8 nm, with a ring statistics that approaches that of the bulk. This result confirms the qualitative difference between bound and unbound water 28. The analyses of the HBN ring statistics and of the HBN defects show that the membrane interface generates a perturbation in the ring statistics that extends as far as, at least, ∼ 2.4 nm 43. These observations, therefore, corroborate that the water structure is affected by the membrane interface up to a distance at least twice as large as that usually associated with hydration water. All these findings should be taken into account when interpreting experimental results and when developing membrane-water interaction potentials.
They can help in better understanding the role of water in biological processes at large and, in particular, in those phenomena where hydration plays a role. From a more general perspective, these calculations imply that the concept of hydration should be revised to account for the results presented here. Our conclusions call for further investigation of the relationship between diseases, possibly promoted by extracellular-matrix variations, e.g., of hydration or ionic concentration, and the rearrangements of the water HBN. Examples of such illnesses are cardiac disease and arterial hardening in healthy men 68, or atherosclerosis and inflammatory signaling in endothelial cells 69. Indeed, variations of ionic concentration drastically change the water HBN structure 70 and dynamics 71, with an effect similar to that of an increase in pressure 72, while dehydration affects the dynamics and the structure of water near a membrane in a way that resembles a decrease in temperature 28. In particular, we foresee the extension of these calculations to out-of-equilibrium cases. Indeed, it has recently been shown that the potency of antimicrobial peptides may not be a purely intrinsic chemical property and, instead, depends on the mechanical state of the target membrane 73, which varies under normal physiological conditions.
Effectiveness of Acceptance and Commitment Therapy on Anxiety and Depression of Razi Psychiatric Center Staff

AIM: Considering the key role of human resources as the main operators of organisations, the present research aimed to determine the effectiveness of acceptance and commitment therapy for anxiety and depression among Razi Psychiatric Center staff.

MATERIALS AND METHODS: This research used a quasi-experimental pre-test/post-test design with a control group. Accordingly, 30 people were selected through voluntary sampling among Razi Psychiatric Center staff, randomly assigned to two groups of 15 (experimental and control), and evaluated using the research tools. These consisted of the Beck Anxiety and Depression Inventories, whose reliability and validity have been confirmed in several studies. Research data were analysed using analysis of covariance (ANCOVA).

RESULTS: The statistical analysis confirmed a difference in anxiety and depression between the experimental group, which had received acceptance and commitment therapy, and the group that had received no therapy (control group) (p < 0.05).

CONCLUSION: Acceptance and commitment therapy reduces anxiety and depression.

Introduction

With the increasing complexity of modern societies, the mission of organisations and institutions to meet the expectations of society becomes more sensitive and more important. It can therefore be acknowledged that our world is a world of organisations. What experts now agree on is the fundamental role of human resources as the main operators of organisations; in other words, the human element gives organisations life. Undoubtedly, an efficient and motivated workforce is the most effective factor in growing, developing, and achieving planned objectives [1]. Work is an important part of an individual's life. On the one hand, it can satisfy basic human needs such as physical and mental growth, social communication, and a sense of worth, confidence, and competence; on the other hand, it can be a major source of stress [2]. Some events during working days are interpreted as a threat to physical and psychological well-being. Events perceived as stressors are followed by negative emotional responses, particularly anger or anxiety, and these emotions in turn produce behavioural and physical strain. Such pressures increase blood pressure, heart rate, and the secretion of stress hormones such as adrenaline through psychological arousal. In the short term, these physiological changes can lead to physical symptoms such as headache or stomachache; in the long term, persistently elevated heart rate and blood pressure can also lead to heart disease. People must think well to work properly and must be healthy to think well. Therefore, physical and mental health can have a major impact on human resources productivity. Anxiety and depression are important psychological issues that can cause physical and mental fatigue. Different interventions have been used to treat depression and anxiety. It should be noted, however, that interventions do not always have a positive effect, and their effectiveness has sometimes been limited; some interventions have been effective for some people and ineffective for others. Moreover, newer interventions addressing occupational issues have received little attention in Iran.
One of the interventions used in the field of depression and anxiety is acceptance and commitment therapy. This therapy is one of the third-wave interventions, which are based on mindfulness [3]. In this approach, mindfulness is conscious awareness of the experience of the here and now, with openness, interest, and acceptance. It includes living in the present, staying engaged with the task at hand, and not getting distracted by thoughts; the person allows thoughts and feelings to come and go as they are, without trying to control them. When we observe private experiences (thoughts and feelings) with openness and acceptance, even the most painful of them become less threatening and more tolerable [4]. In acceptance and commitment therapy, depression is conceptualised as emotion tied to past events, such as death or loss, which prevents normal reactions and adaptation to stressful life events. In this approach, the content of the depressed person's negative thoughts is not the focus. The tendency to behave based on the content of thoughts is called "cognitive fusion", in which the person tries to eliminate the causes of depression in ways that may not be helpful. "Defusion" is the opposite of cognitive fusion and mediates the consequences of depression: clients learn to distance themselves from their thoughts and to base their actions on their values [5]. Accordingly, this study aims to investigate the effectiveness of acceptance and commitment therapy on anxiety and depression of Razi Psychiatric Center staff.

Materials and Methods

The present research is quasi-experimental, with a pre-test/post-test design and two groups, an experimental group and a control group. Both groups were measured twice: the first measurement was performed with a pre-test before the intervention, and the second measurement was performed after the end of the required interventions. Table 1 shows the content of the ACT sessions.
Table 1: Content of the ACT sessions

- Session 1: Establishing the therapeutic relationship; acquainting participants with the subject of the therapy sessions; treatment contract.
- Session 2: Discovering and assessing the inefficient strategies members use to reduce anxiety and depression in different situations and evaluating their effects; discussing the temporary and ineffective nature of these methods using analogies; feedback and assignments.
- Session 3: Assisting people to accept painful personal events without struggling with them, using analogies; feedback and assignments.
- Session 4: Explaining avoidance of painful experiences and awareness of its consequences; training in the steps of acceptance; changing language concepts using analogies; relaxation training; feedback and assignments.
- Session 5: Introducing a three-dimensional behavioural model to express common communication behaviours/emotions and psychological and visible behavioural functions, and discussing attempts to change behaviour based on them; feedback and assignments.
- Session 6: Explaining the concepts of roles and terms; viewing oneself as context and making contact with it through analogies; understanding different sensory perceptions and mental separation; feedback and assignments.
- Session 7: Explaining the concept of values; creating motivation and empowering people for a better life; concentration exercises; feedback and assignments.
- Session 8: Training commitment to action; identifying behavioural patterns consistent with values and committing to act on them; summing up the meetings; administering the post-test.

The statistical population, sample, and sampling

The statistical population of the present research included all Razi Psychiatric Center staff employed during 2015-16. The sample was selected through voluntary sampling and randomly divided into two groups, an experimental group and a control group, with 15 people in each. The experimental group members participated in eight weekly 90-minute sessions. No intervention was applied to the control group.

Inclusion criteria

- Work experience of five years or more at Razi Psychiatric Center, a Bachelor's degree or above, both sexes.
- Burnout, evaluated using the Maslach Burnout Inventory, at a medium to high level.
- No history of mental illness.

Beck Anxiety Inventory

The Beck Anxiety Inventory includes 21 items, each with four response options. Each item reflects one of the symptoms of anxiety usually experienced by people who are clinically anxious or in a state of apprehension. The subjects mark, in a column, how much they suffered from each symptom of anxiety during the last week. Scoring is: not at all, zero; low, one; medium, two; severe, three. The anxiety score ranges from zero to 63. Beck et al. (1988) reported a reliability of 0.75 via retesting of 83 outpatients within a week. Federikh et al. (1992) reported an alpha coefficient of 0.94 for 40 outpatients. In a study on an Iranian population, the Cronbach's alpha coefficient was 0.90 [6]. The validity, reliability, and internal consistency of the Beck Anxiety Inventory in an Iranian population were recorded as 0.72, 0.83, and 0.92, respectively [7].

Beck Depression Inventory

The inventory was first developed by Beck et al. (1961) [8]. The BDI-II is a 21-item self-report inventory, the revised form of the BDI.
It is applied to determine the severity of depression and depressive symptoms in psychiatric patients and to assess depression in the general population. Each item is scored from 0 to 3 across four options, from absence of the symptom to its most severe degree. Beck et al. reviewed studies that had used this tool and found that its test-retest reliability coefficient varied from 0.48 to 0.86, depending on the interval between administrations. Beck et al. (1996) obtained a one-week test-retest reliability coefficient of 0.93. Several studies have been conducted in Iran to measure the psychometric properties of the BDI-II; its reliability was 0.78 and its validity varied from 0.70 to 0.90 [9][10].

Research method

After preparation of the research tools, the anxiety and depression questionnaires were distributed among Razi Psychiatric Center staff. Among the respondents, 30 people with moderate to high depression and anxiety were selected and randomly divided into an experimental and a control group of 15 people each (this administration served as the pre-test for both groups). Those in the experimental group received acceptance and commitment therapy, delivered in eight weekly 90-minute sessions; the control group did not receive this treatment. At the end of the eight weekly sessions, the questionnaires were administered to both groups again, and anxiety and depression were recorded (post-test). Two months after the end of the sessions, a follow-up assessment was conducted with both groups, and their anxiety and depression were again measured and recorded. The pre-test, post-test, and follow-up scores were analysed to assess the effectiveness of the independent variable. The intervention sessions were conducted in a group format; the content of each session is summarised in Table 1.

Results

The descriptive findings for anxiety and depression are given in Table 2. Analysis of covariance was used to test the hypothesis, with group (experimental vs control) as the two-level independent variable, the post-test anxiety and depression scores as the continuous dependent variables, and the pre-test scores as covariates. Checking the data against the assumptions of the analysis of covariance showed that most were satisfied; only the homogeneity of variances for some components fell outside the criteria, for which an alpha of 0.025 was adopted. The covariance analysis results are reported below. According to these results, there is a significant difference between the experimental and control groups in anxiety (F = 119.955, p < 0.001): after removing the effect of the pre-test scores, the adjusted post-test means differ significantly by group. In general, it can be said that acceptance and commitment therapy reduces anxiety at post-test, and the effect size is substantial. The follow-up results showed that the treatment effect remained stable after eliminating the effect of the pre-test (F = 81.072, p < 0.001). Therefore, it can be said that acceptance and commitment therapy significantly reduces anxiety in the long term.
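A minimal sketch of how an ANCOVA of this kind can be run in Python with statsmodels is shown below; the data frame, the column names, and all values are invented for illustration only and are not the study's data.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical layout: one row per participant, with group membership
# and pre-/post-test anxiety scores.
df = pd.DataFrame({
    "group": ["exp"] * 15 + ["ctrl"] * 15,
    "anx_pre": [22, 25, 30, 28, 24, 27, 31, 26, 23, 29, 33, 21, 27, 25, 30,
                24, 26, 29, 31, 22, 28, 27, 25, 30, 23, 26, 32, 24, 29, 28],
    "anx_post": [12, 14, 18, 15, 11, 13, 17, 14, 10, 16, 19, 9, 13, 12, 17,
                 23, 25, 28, 30, 21, 27, 26, 24, 29, 22, 25, 31, 23, 28, 27],
})

# ANCOVA: post-test score as outcome, group as factor, pre-test as covariate.
model = ols("anx_post ~ anx_pre + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```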
According to the results, there is also a significant difference between the experimental and control groups in depression (F = 152.302, p < 0.001): after removing the effect of the pre-test scores, the adjusted post-test means differ significantly by group. In general, it can be said that acceptance and commitment therapy reduces depression at post-test, and the effect size is substantial. The follow-up results showed that the treatment effect remained stable after eliminating the effect of the pre-test (F = 30.413, p < 0.001). Therefore, it can be said that acceptance and commitment therapy significantly reduces depression in the long term.

Discussion

In the current study, the effectiveness of acceptance and commitment therapy on anxiety and depression was investigated, and the findings demonstrate that acceptance and commitment therapy can reduce anxiety and depression. These results are consistent with the findings of previous studies [16] [11]. Nariman et al. showed that acceptance/commitment training has a positive effect on decreasing social anxiety in students with specific learning disorder (SLD) [11]. Hosseinaei et al. demonstrated that group acceptance and commitment therapy (ACT)-based training decreases job stress but has no considerable effect on job burnout [12]. Lang et al. evaluated the efficacy of ACT for emotional distress among veterans of the conflicts in Iraq and Afghanistan; they found improvement following treatment in the whole sample across a variety of measures, including general distress and functioning, and moderate to high levels of satisfaction with treatment [13]. Acceptance and commitment therapy has several basic components, and emphasising them at the different steps leads individuals to accept their problems and perceive less anxiety and stress, which improves health. The method targets psychological flexibility, i.e., the ability to make a practical choice among various relevant options, rather than acting merely to avoid disturbing thoughts, feelings, and memories [17]. In this therapy, the initial aim is to increase the subjects' psychological acceptance of subjective experiences (thoughts, feelings) and, reciprocally, to reduce ineffective control practices. The patient is taught that any action to prevent or control these unwanted mental experiences is ineffective or counterproductive and exacerbates them, and that the experiences should be accepted completely, without any internal or external reaction aimed at removing them. The mental experiences in patients include emotional ambivalence, frustration, chronic sadness, loneliness, loss of hope and of a sense of continuity of generations, embarrassment, shame, guilt, and anger. Therefore, in the first step, participants learned to accept these feelings without reacting. In the second step, the participants' psychological awareness is increased; that is, the individuals become aware of all their mental states, thoughts, and behaviour in the present moment. In the third stage, the individuals are taught to separate themselves from their subjective experiences (cognitive defusion) so that they can act independently of those experiences. Fourth, efforts are made to reduce excessive attachment to the self-image or personal story (e.g., as victims) that individuals have constructed for themselves.
Fifth, individuals are helped to understand their basic personal values and to convert them into specific behavioural goals (clarifying values). Finally, they are motivated to act responsibly toward the goals and values identified, while accepting their mental experiences. These can be subjective experiences related to traumatic events, or social anxieties and concerns. Thus, in the final stage, participants accept their subjective experiences and can act responsibly. The first direct consequence of accepting feelings and emotions is a reduction of negative thoughts, and responsible behaviour leads to effective action instead of an anxious reaction. To explain the effect on depression, it can be said that, according to ACT theorists, an important factor in creating and maintaining psychological trauma and increasing depression is experiential avoidance. This means an exaggerated negative assessment of internal experiences (such as thoughts, feelings, and emotions) and an unwillingness to experience them, which leads to attempts to control or escape from them and can interfere with individual functioning [17]. People with greater experiential avoidance report fewer positive emotional experiences and lower mental health, and feel that their lives are meaningless. The purpose of acceptance and commitment therapy, however, is to reduce experiential avoidance and increase psychological flexibility through accepting inevitable distressing and unpleasant feelings, cultivating mindfulness to neutralise excessive involvement with thoughts, and recognising and identifying personal values linked to behavioural goals. Participants are encouraged to engage with their experiences fully and without resistance while moving toward worthy goals, and to accept those experiences without judging their truth or falsity when they emerge. This increases motivation to change despite obstacles and encourages individuals to pursue worthy goals in life, which leads to improvement in depression, especially in its psychological aspects. Psychological flexibility and acceptance can improve people's health status in different fields, help them promote meaningful aspects of life, and increase valued activities that help improve depression. Accepting thoughts as thoughts, feelings as feelings, and emotions as emotions (as they are, no more and no less) weakens cognitive fusion. Also, accepting internal events when individuals struggle with depression and disturbance allows them to broaden their behavioural repertoires; they can then devote the time gained to purposeful and valued activities. In this way, depression is improved. This study has some limitations that restrict its application and generalisation. First, the findings are based on self-report. Second, the small sample size is another potential limitation. Further studies with larger sample sizes are suggested to corroborate these results, and analysis of the effects of all aspects of acceptance and commitment therapy on anxiety and depression is recommended. In conclusion, it can be said that acceptance and commitment therapy can reduce anxiety and depression.
The impact of training dataset size and ensemble inference strategies on head and neck auto-segmentation

Convolutional neural networks (CNNs) are increasingly being used to automate segmentation of organs-at-risk in radiotherapy. Since large sets of highly curated data are scarce, we investigated how much data is required to train accurate and robust head and neck auto-segmentation models. For this, an established 3D CNN was trained from scratch with different-sized datasets (25-1000 scans) to segment the brainstem, parotid glands and spinal cord in CTs. Additionally, we evaluated multiple ensemble techniques to improve the performance of these models. The segmentations improved with training set size up to 250 scans, and the ensemble methods significantly improved performance for all organs. The impact of the ensemble methods was most notable for the smallest datasets, demonstrating their potential for use in cases where large training datasets are difficult to obtain.

INTRODUCTION

Half of people will get cancer in their lifetime and many of these will receive radiotherapy in their treatment [1]. Radiotherapy patients receive a computed tomography (CT) scan which is used to plan their treatment. Accurate 3D segmentation of healthy organs close to the tumour (also known as organs-at-risk, OARs) in the CT is critical to plan the best treatment, focusing radiation onto the tumour (the target) and sparing OARs. Manual segmentation of OARs by clinicians is slow and prone to variability [2], so convolutional neural networks (CNNs) are now increasingly being used to automatically generate segmentations. It is generally well known that deep learning models trained on larger datasets will generalise better to unseen data. However, in radiotherapy, large sets of high-quality training data are scarce and annotating large 3D CT scans is time-consuming. Therefore, it is important to understand how many examples are required to train a CNN model for 3D OAR auto-segmentation that is accurate and robust. In this study, we evaluated this for head and neck (HN) auto-segmentation. We additionally evaluated several ensemble techniques, combining the predictions from multiple trained models, which could boost segmentation performance.

MATERIAL AND METHODS

For this study we gathered, from a single institution, 1215 planning CT scans with clinical OAR segmentations for the brainstem, parotid glands and the cervical section of the spinal cord. We reserved 215 scans as an unseen test set, which left 1000 scans for model training. We trained an established HN auto-segmentation CNN [3] from scratch to segment these OARs, using random subsets of 25, 50, 100, 250, 500, 800 or 1000 images. A 5-fold cross-validation was performed on each subset, splitting the data of each to produce 5 unique parameter sets (or models) (Fig. 1). The models produced at each dataset size were evaluated on the unseen test set using 2 inference strategies.

Inference strategies

First, the best model (BM) from each 5-fold cross-validation was selected based on the segmentation performance on the internal test set of each fold (shown hatched in Fig. 1). Models were ranked based on the median values of the bidirectional mean distance-to-agreement (mDTA) and 95th percentile Hausdorff distance (HD95) metrics. Next, we tested four different ensembling techniques to combine the predictions of all 5 models at each dataset size (a code sketch of the first three is given at the end of the Results section):
1. Summation of logits: the raw outputs (logits) from each model were summed prior to taking the argmax to generate a segmentation mask.
2. Summation of Softmax: the logits from each model were passed through a Softmax activation function prior to being summed, and then the argmax was applied to generate a segmentation mask.
3. Majority vote: the argmax was applied directly to generate a predicted segmentation mask for each model, and then the most popular class for each voxel was computed.
4. STAPLE: the argmax was applied to generate a segmentation mask for each model, and then the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm [4] was used to generate a consensus segmentation mask.

Fig. 1. The 215 reserved scans form the unseen testing pool; the remaining 1000 are subdivided into multiple training dataset sizes ranging from 25 to 1000 scans. A 5-fold cross-validation is completed for each dataset size. The internal test sets (hatched) were used to determine the best-performing cross-validation model.

Both BM and each of the ensemble inference strategies were tested on the unseen testing pool of 215 scans, and the segmentation quality was again assessed using mDTA and HD95. These two metrics are complementary, as mDTA evaluates the overall quality of the segmentation and HD95 highlights large errors. We used Wilcoxon signed-rank tests to compare each of the ensemble approaches to BM inference to determine which had superior performance. We ranked all the ensemble strategies on the level of improvement using a simple points-based system, allocating points according to the level of significance of the improvement suggested by the Wilcoxon signed-rank test. We gave points as follows: 5 for p < 0.000005; 4 for p < 0.00005; 3 for p < 0.0005; 2 for p < 0.005; 1 for p < 0.05; 0 for p > 0.05 (not significant). We report the total number of points for each ensemble method and metric, across all OARs and dataset sizes. While rudimentary, this ranking system summarises which ensemble techniques perform best.

Implementation details

The CNN model used in this study was a 3D UNet with residual connections. Full implementation details can be found in [3], and identical training protocols were used for each model. All CNN models were implemented in PyTorch 1.10.1 and training was performed using a single NVidia GeForce RTX 3090 GPU with 24GB of memory.

RESULTS

Figure 2 shows the mDTA results of the models for every dataset size. The results for the different inference styles are shown in neighbouring boxes: the solid-coloured boxes show the results of BM, while the hatched boxes are the results of the different ensemble techniques, in the order described in section 2.1. As expected, auto-segmentation performance steadily improves with dataset size; above 250 scans we observed very little further improvement. Each of the ensemble techniques improved the segmentation performance when compared to inference with the BM alone, and ensemble techniques 1-3 were significantly better for all dataset sizes for the mDTA and HD95 metrics. The use of the STAPLE algorithm resulted in worse performance compared to the other ensemble techniques for the mDTA metric, occasionally performing slightly worse than single-best-model inference (e.g. for 250 & 800 images).
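Below is a minimal NumPy sketch of ensemble strategies 1-3 from section 2.1, not the authors' implementation. The array shapes and the toy logits are assumptions for illustration; STAPLE is omitted, as it is typically run via an external implementation such as SimpleITK.

```python
import numpy as np

def ensemble_segmentation(logits, strategy="softmax_sum"):
    """Combine per-model predictions into one segmentation mask.

    logits : array of shape (n_models, n_classes, *spatial) with the raw
    network outputs of the cross-validation models.
    """
    if strategy == "logit_sum":        # strategy 1: sum raw outputs
        return logits.sum(axis=0).argmax(axis=0)
    if strategy == "softmax_sum":      # strategy 2: sum softmax maps
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs = e / e.sum(axis=1, keepdims=True)
        return probs.sum(axis=0).argmax(axis=0)
    if strategy == "majority_vote":    # strategy 3: per-voxel mode
        masks = logits.argmax(axis=1)  # (n_models, *spatial)
        n_classes = logits.shape[1]
        votes = np.stack([(masks == c).sum(axis=0) for c in range(n_classes)])
        return votes.argmax(axis=0)
    raise ValueError(strategy)

# Toy example: 5 models, 3 classes, a 4x4x4 volume of random logits.
preds = np.random.default_rng(0).normal(size=(5, 3, 4, 4, 4))
print(ensemble_segmentation(preds, "majority_vote").shape)   # (4, 4, 4)
```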
DISCUSSION

In this study we have explored the question of how much data is needed to successfully train a CNN for 3D CT auto-segmentation. For our case, segmenting HN OARs, we observed that improvements in performance were negligible when training with datasets larger than 250 clinical samples. Additionally, we tested different ensemble inference strategies, including STAPLE, a common algorithm in other applications (e.g., atlas-based segmentation [5]). We found that the ensemble strategies of either summing the Softmax maps or using a majority vote performed significantly better. Moreover, the segmentation performance boost of the ensemble techniques was most significant as the number of training examples decreased, showing the most dramatic improvement for the smallest datasets. Fang et al. similarly explored the impact of training sample size on HN deep-learning auto-segmentation models [6]. However, their segmentation results were based solely on the Dice similarity coefficient which, whilst popular, is volume-biased, insensitive to fine details and can hide clinically relevant differences between structure boundaries [7]. Their study also used a 2D model, which may have different performance characteristics with dataset size than our 3D model. Ren et al. used ensembling to combine predictions from multiple imaging modalities (CT, MRI and PET) for tumour segmentation [8]. They used an average of Softmax probabilities, similar to our best-performing ensemble approach. Whilst we have shown that segmentation performance scales with dataset size and can be improved with ensemble techniques, it is evident this is not the entire story. The CNN architecture used in this study was originally designed for limited data and demonstrated good performance when trained with just 34 CTs with highly consistent, arbitrated gold-standard segmentations [3,9]. However, in our study, the models were trained on unedited clinical contours, which are susceptible to observer variation. Therefore, it is apparent that there are further performance gains to be attained by cultivating a highly consistent training dataset. We have shown that ensemble techniques can be used to improve the performance of auto-segmentation models, with an effect that is most noticeable for models trained with smaller datasets. Ensemble methods could be particularly effective in scenarios where data is scarce, for example for rare anatomies or for structures that are difficult to segment. In such cases, ensemble methods offer improved robustness and accuracy of automatic segmentations despite a limited number of training examples.

CONCLUSION

We have demonstrated that ensemble techniques significantly improve the auto-segmentation performance for healthy organs in the head and neck region. Furthermore, ensemble inference strategies are an effective technique to improve segmentations, especially in cases where larger validated training datasets are difficult to obtain.

COMPLIANCE WITH ETHICAL STANDARDS

This project used retrospective and anonymised patient images, and was approved by the UK Computer Aided Theragnostics Research Database Management Committee (North
Estrogen inhibits renal Na-Pi co-transporters and improves Klotho deficiency-induced acute heart failure

Objective and hypothesis: Klotho is an aging-suppressor gene. Mutation of the Klotho gene causes hyperphosphatemia and acute heart failure; however, the relationship between hyperphosphatemia and acute heart failure is unclear. We hypothesize that hyperphosphatemia mediates Klotho deficiency-induced acute heart failure, and further that therapeutic reduction of hyperphosphatemia prevents acute heart failure in Klotho mutant (KL(−/−)) mice.

Methods and results: A significant elevation of serum phosphorus levels and a large reduction in heart function were found in KL(−/−) mice by six weeks of age. Normalization of serum phosphorus levels by a low-phosphate diet (LPD) rescued Klotho deficiency-induced heart failure and extended lifespan in male mice. Klotho deficiency impaired cardiac mitochondrial respiratory enzyme function and increased superoxide production, oxidative stress, and cardiac cell apoptosis in male KL(−/−) mice, all of which were eliminated by LPD. LPD, however, did not rescue hyperphosphatemia or heart failure in female KL(−/−) mice, nor did it affect estrogen depletion in female KL(−/−) mice. Normalization of serum estrogen levels by treatment with 17β-estradiol prevented hyperphosphatemia and heart failure in female KL(−/−) mice. Mechanistically, treatment with 17β-estradiol rescued hyperphosphatemia by inhibiting renal Na-Pi co-transporter expression. Normalization of serum phosphorus levels by treatment with 17β-estradiol also abolished cardiac mitochondrial respiratory enzyme dysfunction, ROS overproduction, oxidative stress and cardiac cell apoptosis in female KL(−/−) mice.

Conclusion: Klotho deficiency causes acute heart failure via hyperphosphatemia in male mice, which can be prevented by LPD. 17β-estradiol prevents Klotho deficiency-induced hyperphosphatemia and heart failure by eliminating the upregulation of renal Na-Pi co-transporter expression in female mice.

Introduction

Chronic kidney disease (CKD) affects approximately 10% of the general population [1]. Cardiovascular disease, occurring in up to 95% of patients with CKD (also known as uremic cardiomyopathy), is the major cause of mortality for patients with CKD [2]. Causes of uremic cardiomyopathy include traditional risk factors, such as hypertension, diabetes and hyperlipidemia, and CKD-specific factors that remain poorly defined [3]. Among these factors, phosphate retention has recently received much attention [3,4]. Hyperphosphatemia (an abnormally elevated level of phosphate in the blood) can result from decreased phosphate excretion by the kidneys. Hyperphosphatemia leads to pathophysiological changes that contribute to the high rates of mortality observed in CKD [2]. It plays a central role in the development of a variety of serious clinical consequences, including renal osteodystrophy, cardiovascular and soft tissue calcification, and cardiac death [5]. However, the underlying mechanism of hyperphosphatemia-induced cardiomyopathy is poorly understood. Klotho is an anti-aging gene that is primarily expressed in the renal tubular epithelial cells of the kidneys and the choroid plexus of the brain [6]. Klotho has multiple actions. Transmembrane Klotho serves as a co-receptor for fibroblast growth factor-23 (FGF23) to regulate phosphate balance [7][8][9]. The extracellular domain of Klotho can be shed into the circulation as soluble Klotho, which exerts systemic effects [9][10][11].
The mouse and human Klotho genes also encode a short-form, secreted Klotho protein that is generated by alternative mRNA splicing [9,12]. In the kidney, Klotho promotes urinary excretion of phosphorus and maintains phosphate homeostasis [9,13]. Klotho gene mutation impairs phosphorus excretion, leading to hyperphosphatemia [9]. Thus, it is important to investigate whether hyperphosphatemia is involved in the pathogenesis of Klotho deficiency-induced acute heart failure. We hypothesized that a reduction of hyperphosphatemia by a low-phosphate diet may prevent cardiac remodeling and heart failure in Klotho-deficient mice. Women have a lower risk of heart disease [14]; however, this sex difference disappears after age 65 [14]. A decline in the natural hormone estrogen may be a contributing factor to heart disease among post-menopausal women. Therefore, we investigated the sex difference in Klotho deficiency-induced heart failure and the underlying mechanism. We hypothesized that estrogen treatment may improve Klotho deficiency-induced heart failure in female mice.

Methods

Expanded methods can be found in the Online Supplemental Methods and Data.

Animal study protocols

Klotho-hypomorphic mutant mice (KL(−/−)) (kindly provided by Dr Kuro-o) were backcrossed to 129/SvJ mice for more than nine generations to achieve a congenic background [6]. For dietary phosphate restriction (LPD), mice were fed a purified diet containing 0.2% (wt/wt) inorganic phosphate (TD-09073, Harlan Teklad, Madison, WI) from weaning at 3 weeks of age. Normal-phosphate diets contain 0.35% inorganic phosphate. Both male and female mice were used. For the estrogen study, 17β-estradiol with a biodegradable carrier-binder (0.25 mg/pellet, 90-day release) was implanted subcutaneously in KL(−/−) female mice fed the low-phosphate diet at age 7 weeks. The animal protocol was approved by the Institutional Animal Care and Use Committee (IACUC) of the University of Oklahoma Health Sciences Center. Cardiac Magnetic Resonance Imaging (MRI) was performed as described recently [15,16]. For details, refer to the online supplemental methods.

Apoptosis assays

Apoptotic cells were detected as described previously [29,30]. For details, refer to the online supplemental methods.

Statistical analysis

Quantitative data are presented as means ± SEM. Differences between experimental groups were examined by one-way analysis of variance (ANOVA) followed by the Tukey post-test, or by two-way ANOVA followed by the Bonferroni post-test, using Prism software (GraphPad). The unpaired t-test was used for comparisons between two groups. For all analyses, p < 0.05 was considered statistically significant.

Results

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Klotho gene mutation caused acute heart failure

Heart function of WT and KL(−/−) mice was measured using magnetic resonance imaging (MRI). Four male and four female mice were used for each strain. Fractional shortening, ejection fraction, and stroke volume decreased significantly in KL(−/−) mice at age 6 weeks (Figs. S1A-C), indicating that Klotho deficiency impairs heart function, leading to acute heart failure. Serum phosphorus levels were notably increased in KL(−/−) mice (Fig. S1D), indicating hyperphosphatemia. It is not clear, however, whether hyperphosphatemia is involved in Klotho deficiency-induced acute heart failure.
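Group comparisons of this kind follow the statistical analysis described in the Methods (one-way ANOVA with a Tukey post-test). A minimal Python sketch of an equivalent analysis is shown below; the group names and serum phosphorus values are invented for illustration only and are not the study's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical serum phosphorus values (mg/dL) for three groups.
wt = [6.1, 5.8, 6.4, 6.0, 5.9]
kl_nd = [11.2, 12.0, 10.8, 11.6, 12.3]
kl_lpd = [6.5, 6.9, 6.2, 7.1, 6.6]

f, p = stats.f_oneway(wt, kl_nd, kl_lpd)          # one-way ANOVA
print(f"F = {f:.2f}, p = {p:.4f}")

values = np.concatenate([wt, kl_nd, kl_lpd])
labels = ["WT"] * 5 + ["KL-ND"] * 5 + ["KL-LPD"] * 5
print(pairwise_tukeyhsd(values, labels))          # Tukey post-test
```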
Low-phosphate diet prevented acute heart failure in male KL(−/−) mice but not female KL(−/−) mice

We then investigated whether dietary phosphate restriction can protect against acute heart failure in KL(−/−) mice. After weaning, KL(−/−) mice were fed a low-phosphate diet (LPD, containing 0.2% inorganic phosphate) or a normal-phosphate diet (ND, containing 0.35% inorganic phosphate). Cardiac function was measured by MRI in 13-week-old KL(−/−) mice on LPD and 6-week-old KL(−/−) mice on ND; KL(−/−) mice on ND have a short lifespan and die by age 8 weeks (Fig. S4). Fractional shortening, ejection fraction, stroke volume, and cardiac output were decreased significantly in male KL(−/−) mice fed ND (Fig. 1A-D). Interestingly, LPD prevented the Klotho deficiency-induced impairment of heart function, i.e., heart failure, in male KL(−/−) mice. LPD largely increased the survival rate and extended lifespan in male KL(−/−) mice (Fig. S4). However, LPD did not rescue heart failure in female KL(−/−) mice (Fig. 1A-D). Mice were euthanized 1 week after the heart function measurement; heart gravimetric data were collected and tissues were harvested. The heart weight to body weight ratio and the left ventricular myocardial mass to body weight ratio were significantly increased in KL(−/−) mice on ND compared to WT mice.

Mitochondrial dysfunction and cell apoptosis are important mechanisms in the development of myocardial remodeling and heart failure; therefore, we measured cardiac mitochondrial enzyme activity and cardiac cell apoptosis. Mitochondrial complex I and complex IV enzyme activities were significantly decreased in KL(−/−) mice fed ND, while LPD prevented the downregulation of these mitochondrial enzyme activities in male KL(−/−) mice but not in female KL(−/−) mice (Figs. S2 and S3). Cardiac cell apoptosis was dramatically increased in KL(−/−) mice fed ND, as evidenced by increased TUNEL staining (Fig. 2A and B), DNA laddering (Fig. 2C) and cleaved caspase-3 expression (Fig. 2D). LPD prevented the increase in cardiac cell apoptosis in male KL(−/−) mice but not in female KL(−/−) mice (Fig. 2A-D). Surprisingly, dietary phosphate restriction decreased serum phosphorus levels in male KL(−/−) mice but not in female KL(−/−) mice (Fig. 3A). This finding explains why LPD prevented cardiomyopathy in male KL(−/−) mice but not in female KL(−/−) mice. Serum calcium levels were slightly increased in both male and female KL(−/−) mice (Fig. S8), and LPD slightly decreased serum calcium levels in KL(−/−) mice. Estrogen regulates serum phosphorus levels [31]; thus, we measured serum estrogen levels. In WT mice, serum estrogen levels were much higher in female than in male mice under both normal-diet and LPD conditions (Fig. 3B). Serum estrogen levels did not change in male KL(−/−) mice.

17β-estradiol prevented Klotho deficiency-induced cardiac remodeling and heart failure in female KL(−/−) mice

To explore the influence of estrogen on phosphate metabolism and cardiomyopathy, we treated female KL(−/−) mice with 17β-estradiol. Briefly, 17β-estradiol with a biodegradable carrier-binder (0.25 mg/pellet, 90-day release) was implanted subcutaneously in KL(−/−) female mice fed LPD at the age of 7 weeks. Heart function was measured using MRI at 13 weeks of age. Fractional shortening, ejection fraction, stroke volume, and cardiac output were significantly decreased in female KL(−/−) mice (Fig. 4A-D). Treatment with 17β-estradiol abolished the decreases in these parameters (Fig. 4A-D),
4A-D), indicating that 17β-estradiol rescued left ventricular dysfunction in female KL(−/−) mice. Treatment with 17β-estradiol largely prevented the drop in body weight and increased the survival rate in female KL(−/−) mice (Figs. S5A-C). Female mice were euthanized before dying or at the age of 20 weeks, and tissues were collected. The heart weight to body weight ratio was significantly increased in KL(−/−) mice compared to WT mice (Fig. 4E). The left ventricular myocardial mass to body weight ratio was also significantly increased in female KL(−/−) mice (Fig. 4F). Cardiac hypertrophy was also manifested by increased expression of ANP and BNP in the hearts of female KL(−/−) mice (Fig. 4G and H). Treatment with 17β-estradiol attenuated cardiac hypertrophy in female KL(−/−) mice (Fig. 4E-H). Cardiac hypertrophy was characterized by cardiac remodeling with extensive fibrosis (Figure S6) and dystrophic calcification (Figure S7A). Treatment with 17β-estradiol attenuated cardiac fibrosis and calcification (Figures S6 and S7A). Klotho deficiency-induced cardiac calcification was associated with upregulation of runt-related transcription factor 2 (Runx2) (Fig. S7B), a key transcription factor associated with osteoblast differentiation. Overall, the data suggest that estrogen depletion plays a critical role in Klotho deficiency-induced heart failure and cardiac remodeling in female mice, which can be rescued by supplementation with exogenous estrogen. Normalization of serum phosphorus levels by 17β-estradiol treatment attenuated cardiac oxidative stress, mitochondrial dysfunction and cardiac cell apoptosis in female KL(−/−) mice. We next assessed cardiac oxidative stress, mitochondrial function and cell apoptosis. To evaluate cardiac reactive oxygen species (ROS) levels and oxidative stress-associated damage, we used immunostaining and western blot analysis of 4-HNE, a product of lipid peroxidation, and DHE fluorescence. Intracellular superoxide converts DHE to ethidium, which binds to double-stranded DNA resulting in nuclear red fluorescence. Cardiac protein oxidation was also assessed by measuring carbonyl groups, a hallmark of ROS-modified proteins. As shown in Fig. 6A, B and C, the levels of 4-HNE and DHE were significantly increased in female KL(−/−) mice, and these increases were effectively attenuated by 17β-estradiol treatment. Exogenous 17β-estradiol also reduced the formation of carbonyl groups (Fig. 6D). Therefore, 17β-estradiol attenuated Klotho deficiency-induced oxidative stress. Since 4-HNE is a protein modification, any protein carrying this modification can be detected by the 4-HNE antibody; multiple bands are therefore expected in the western blot, corresponding to the molecular weights of the modified proteins (Fig. 6C). Similarly, multiple bands are expected in western blot analysis of oxidized proteins that contain carbonyl groups (Fig. 6D). Cardiac ATP content was significantly decreased in female KL(−/−) mice, while 17β-estradiol treatment largely rescued Klotho deficiency-induced ATP depletion (Fig. 7A). The activities of complex I and complex IV were reduced in mitochondria isolated from cardiomyocytes of KL(−/−) mice, and 17β-estradiol treatment prevented Klotho deficiency-induced downregulation of these mitochondrial enzyme activities (Fig. 7B and C). Measurement of the amount of cytochrome c leaking from mitochondria to cytosol is a sensitive method for monitoring the degree of apoptosis [32].
Cytochrome c was decreased in the mitochondrial fraction but increased in the cytosolic fraction in the hearts of KL(−/−) mice, changes that were rescued by 17β-estradiol (Fig. 7D-F). These results suggest that 17β-estradiol prevents cardiac mitochondrial dysfunction in KL(−/−) mice. Consistent with the increased oxidative stress and mitochondrial dysfunction, cardiac cell apoptosis was dramatically increased in female KL(−/−) mice, as evidenced by increased TUNEL staining (Fig. 8A and B) and cleaved caspase-3 expression (Fig. 8C). Treatment with 17β-estradiol prevented Klotho deficiency-induced increases in cardiac cell apoptosis (Fig. 8A-C). Discussion Klotho is an aging-suppressor gene [6,9,29,33,34]. Here we provide the first evidence that mutation of the mouse Klotho gene causes acute heart failure by age 6 weeks without any external stress (Fig. S1). The Klotho gene is primarily expressed in kidney tubule epithelial cells and serves as a coreceptor of FGF23 to inhibit NaPi co-transporters, promoting phosphate excretion [9]. Mutation of the Klotho gene results in upregulation of NaPi co-transporter expression, which increases phosphorus reabsorption, leading to a significant elevation of serum phosphorus levels, i.e. hyperphosphatemia (Fig. 3, Fig. S1) [9]. Interestingly, Klotho deficiency-induced acute heart failure is likely due to hyperphosphatemia, because normalization of serum phosphorus levels by low-phosphate diet (LPD) effectively prevented impairment of cardiac function in male Klotho mutant mice (Fig. 1). LPD largely increased the survival rate and extended lifespan in Klotho mutant mice (Fig. S4). To the best of our knowledge, this is the first study showing an important role of phosphate retention in heart failure. It should be mentioned that LPD can maintain heart function in the normal range in male KL(−/−) mice until 10 months of age [15]. After this age, cardiac function declines in KL(−/−) mice despite treatment with LPD, indicating that Klotho deficiency itself also leads to chronic heart failure [15]. Recent studies showed that Klotho deficiency-induced chronic heart failure is likely due to impairment of the Nrf2-GR pathway and disruption of TRPC6 channels in cardiomyocytes as direct results of downregulation of serum Klotho levels [15,35]. The Klotho gene is not expressed in cardiomyocytes in mice [15]. External stress aggravates Klotho deficiency-induced chronic heart failure in KL(−/−) mice treated with LPD [31,36]. On the other hand, soluble Klotho protects the heart against stress-induced cardiac hypertrophy and remodeling in Klotho-deficient mice [15,35-37]. Another interesting finding of this study is that LPD decreased serum phosphorus levels in male KL(−/−) mice but not female KL(−/−) mice (Fig. 3). This phenomenon accounts for the finding that LPD rescued Klotho deficiency-induced heart failure in male but not female mice (Fig. 1). Regulation of phosphorus excretion by the kidney is the key mechanism for maintaining normal phosphate balance. Protein expression of NaPi co-transporters 2a and 2c, the important transporters for phosphorus reabsorption in the kidney, was markedly upregulated in KL(−/−) mice (Fig. 3). LPD decreased NaPi co-transporter 2a and 2c protein expression in male but not female KL(−/−) mice (Fig. 3). The NaPi cotransport system includes the type IIa and type IIc NaPi cotransporters, which are localized in the apical membrane of the proximal tubular cells [38].
The type IIa NaPi co-transporter is the major determinant of serum Pi levels and urinary Pi excretion. Klotho deficiency depleted estrogen, as evidenced by a significant decrease in serum estrogen levels in female KL(−/−) mice (Fig. 3). This finding is supported by a report by Toyama et al. [39]. Klotho is involved in the regulatory control of pituitary hormones such as FSH and GnRH [9]. Klotho deficiency impairs the regulation of gonadotropins, leading to atrophy of the female reproductive system and hence downregulation of estrogen levels in Klotho-deficient mice [39]. LPD did not affect serum estrogen levels in female KL(−/−) mice. Thus, we treated female KL(−/−) mice with 17β-estradiol, a potent and prevalent endogenous estrogen. Interestingly, 17β-estradiol treatment abolished the upregulation of renal NaPi co-transporter expression and the elevation of serum phosphorus levels (Fig. 5), prevented cardiac remodeling and heart failure (Fig. 4), and extended lifespan (Fig. S5) in female KL(−/−) mice. These data provide the first evidence that estrogen deficiency may contribute to Klotho deficiency-induced hyperphosphatemia and heart failure in female mice. Burris et al. reported that estrogen induces phosphaturia by directly and specifically targeting NaPi-IIa in the proximal tubular cells [40]. This effect is mediated via a mechanism involving coactivation of both estrogen receptor isoforms α and β, which likely form a functional heterodimer complex in the kidney proximal tubule [40]. Webster et al. also reported that estrogen downregulates NaPi-IIa and NaPi-IIc proteins in the proximal tubule through activation of estrogen receptor isoform α [41]. Our data suggest that Klotho controls phosphate levels via estrogen in female mice (Figs. 5 and 9). Despite the putative protective effects of estrogen on cardiovascular health, the American Heart Association recommends against using hormone therapy to reduce the risk of coronary heart disease or stroke in postmenopausal women, because there is insufficient evidence supporting beneficial effects of estrogen therapy. In this study, we found that 17β-estradiol effectively prevented Klotho deficiency-induced heart failure by maintaining normal phosphate balance. This finding may provide new insight into the clinical use of estrogen for treating heart diseases associated with Klotho deficiency in CKD patients. Klotho-deficient mice suffer from abnormally high levels of oxidative stress, which were alleviated by 17β-estradiol treatment (Fig. 6). There is growing evidence that oxidative stress is increased in myocardial failure and may contribute to structural and functional impairments that accelerate disease progression [42-44]. We found that cardiac levels of superoxide and 4-HNE (a product of lipid peroxidation) were significantly increased in KL(−/−) mice and were attenuated by 17β-estradiol. Klotho deficiency increased the formation of cardiac protein oxidation as measured by carbonyl groups (Figs. 6 and S3). Oxidative stress can contribute to organ damage and remodeling [15,24,27,45,46]. Treatment with 17β-estradiol attenuated cardiac protein oxidation as well as cardiac fibrosis and remodeling (Fig. 6, Fig. S6). Oxidative stress leads to mitochondrial protein misfolding, DNA damage and lipid oxidation, which impair energy production and cardiac contractile function, potentially leading to cell apoptosis [47].
Loss of cardiomyocytes is an important mechanism in the development of myocardial remodeling and heart failure. Mitochondria generate adenosine triphosphate (ATP), the major source of energy in the heart. Klotho deficiency impaired mitochondrial function, as manifested by a significant reduction in cardiac ATP generation in KL(−/−) mice, which was improved by 17β-estradiol treatment (Fig. 7). The activities of mitochondrial respiratory chain enzyme complexes I and IV were impaired due to Klotho deficiency and could be rescued by 17β-estradiol treatment. Cytochrome c, a small hemeprotein found loosely associated with the inner membrane of the mitochondrion, is an essential component of the electron transport chain, where it carries one electron. Cytochrome c is widely believed to be localized solely in the mitochondrial intermembrane space under normal physiological conditions. The release of cytochrome c from mitochondria to the cytosol, where it activates the caspase family of proteases, is believed to be a primary trigger leading to the onset of apoptosis [48]. Measuring the amount of cytochrome c leaking from mitochondria to cytosol is a sensitive method for monitoring the degree of cell apoptosis [32]. Cytochrome c was decreased in the mitochondrial fraction but increased in the cytosolic fraction in KL(−/−) mouse hearts, which was largely rescued by 17β-estradiol treatment (Fig. 7). In cultured cardiac cells, we found that treatment with high concentrations of phosphate downregulated the adenylyl cyclase 4 (AC4)/cAMP pathway and caused cell apoptosis (Fig. S9). Downregulation of cellular cAMP may impair mitochondrial respiratory chain enzyme function via the cAMP-PKA pathway [49]. These findings indicate that high phosphate levels have direct detrimental effects on cardiomyocytes (Fig. S9). Thus, Klotho deficiency-induced mitochondrial dysfunction and oxidative stress are likely due to hyperphosphatemia, because normalization of serum phosphate levels by LPD prevented Klotho deficiency-induced mitochondrial dysfunction and oxidative stress in male mice (Fig. S3). Oxidative damage (Figs. 6 and S3) may contribute to cardiac hypertrophy and remodeling (Figs. S3 and S6). Treatment with 17β-estradiol attenuated mitochondrial dysfunction and oxidative stress primarily by lowering hyperphosphatemia through inhibition of NaPi co-transporter expression (Fig. 5). A limitation of this study is that we cannot completely exclude direct beneficial effects of estrogen on Klotho deficiency-induced cardiac oxidative damage. Consistent with increased oxidative stress and mitochondrial dysfunction, cardiac cell apoptosis was dramatically increased in KL(−/−) mice, as evidenced by increased TUNEL labeling and cleaved caspase-3 expression (Figs. 2 and 8). LPD and 17β-estradiol attenuated Klotho deficiency-induced cardiac cell apoptosis. Mitochondrial dysfunction-associated oxidative stress is an important mechanism of cell apoptosis [50]. Taken together, Klotho deficiency-induced cardiomyopathy was accompanied by excessive oxidative stress, mitochondrial dysfunction, and cardiac cell apoptosis, all of which can be prevented by LPD or 17β-estradiol. In summary, Klotho deficiency caused hyperphosphatemia and heart failure. Normalization of serum phosphorus levels by dietary phosphate restriction rescued Klotho deficiency-induced heart failure in male mice. LPD did not prevent estrogen depletion, hyperphosphatemia, or heart failure in female KL(−/−) mice.
Therefore, hyperphosphatemia is the primary cause of Klotho deficiency-induced heart failure (Fig. 9). Furthermore, 17β-estradiol maintains normal phosphate balance by regulating renal NaPi co-transporter expression, which prevents hyperphosphatemia, cardiac remodeling and dysfunction in female KL(−/−) mice. The beneficial effects of LPD and estrogen may be achieved by attenuating cardiac oxidative stress, mitochondrial dysfunction, and cardiac cell apoptosis. Perspective Heart failure is the major cause of mortality in patients with chronic kidney disease (CKD). A decrease in Klotho levels is linked to CKD. Here we report that Klotho deficiency causes acute heart failure via hyperphosphatemia, which can be prevented by a low-phosphate diet (LPD) in male mice. Female Klotho-deficient mice suffer from estrogen depletion, which upregulates renal NaPi co-transporter expression, leading to hyperphosphatemia and heart failure. Estrogen treatment prevents Klotho deficiency-induced hyperphosphatemia and heart failure in female mice by eliminating the upregulation of renal NaPi co-transporter expression (Fig. 5). Thus, estrogen improves mitochondrial dysfunction and heart failure in female KL(−/−) mice likely via normalization of serum phosphorus levels (Fig. 9). High concentrations of phosphate downregulate the AC4/cAMP pathway (Fig. S9), which impairs mitochondrial respiratory enzyme activity, leading to increased superoxide production and oxidative stress and subsequently cardiac cell apoptosis and heart failure (Fig. 9). These findings provide new therapeutic insights into heart failure associated with Klotho deficiency (e.g., in CKD). Source of funding This work was supported by NIH R01 AG049780, AG062375, and HL154147. Disclosure None. Author contribution K.C. designed and performed the experiments, analyzed and interpreted the data, and wrote the manuscript. Z.S. provided conceptual design, manuscript editing, and funding management. Declaration of competing interest None. Fig. 9. Central Illustration - The mechanistic pathway of Klotho deficiency-induced acute heart failure. Klotho deficiency causes acute heart failure via upregulation of sodium-phosphate (NaPi) co-transporter expression, which leads to hyperphosphatemia. Low-phosphate diet and 17β-estradiol largely rescue Klotho deficiency-induced hyperphosphatemia and acute heart failure in male and female mice, respectively.
2021-10-21T15:04:57.477Z
2021-10-18T00:00:00.000
{ "year": 2021, "sha1": "1f35d8cd2150b378eedccbfacff23c87f6d32e61", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.redox.2021.102173", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fcd2c72c0cfdbf1ab183b21cdde8bd54d21bea15", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
44296603
pes2o/s2orc
v3-fos-license
Radiatively Generated Maximal Mixing Scenario for the Higgs Mass and the Least Fine Tuned Minimal Supersymmetric Standard Model We argue that given the experimental constraints on the Higgs mass the least fine tuned parameter space of the minimal supersymmetric standard model is the one with negative stop masses squared at the grand unification scale. While the stop mass squared is typically driven to positive values at the weak scale, the contribution to the Higgs mass squared parameter from the running can be arbitrarily small, which reduces fine tuning of electroweak symmetry breaking. At the same time the stop mixing is necessarily enhanced, and the maximal mixing scenario for the Higgs mass can be generated radiatively even when starting with negligible mixing at the unification scale. This highly alleviates constraints on possible models for supersymmetry breaking in which fine tuning is absent. The Minimal Supersymmetric Standard Model (MSSM) is a promising candidate for describing physics above the electroweak (EW) scale. The three gauge couplings unify at the GUT (grand unified theory) scale ∼2 × 10¹⁶ GeV within a few percent, and the hierarchy between the EW scale and the GUT scale is naturally stabilized by supersymmetry (SUSY). In addition, if we add soft-supersymmetry-breaking terms at the GUT scale we typically find that the mass squared of the Higgs doublet which couples to the top quark (H_u) is driven to negative values at the EW scale. This triggers electroweak symmetry breaking, and the EW scale is naturally understood from the SUSY breaking scale. Furthermore, assuming R-parity, the lightest supersymmetric particle (LSP) is stable and is a natural candidate for the dark matter of the universe. The real virtue of supersymmetry is that the above mentioned features do not require any specific relations between soft-supersymmetry-breaking parameters (SSBs), and the only strong requirement on SUSY breaking scenarios is that these terms are of order the EW scale. However, generic SSBs near the EW scale typically predict too light a Higgs mass, which is ruled out by LEP limits. The exact value of the Higgs mass is not relevant for low energy physics, nothing crucially depends on it, and yet, in order to stay above LEP limits (m_h ≳ 114.4 GeV [1]) the SSBs have to be either considerably above the EW scale or related to each other in a non-trivial way. SSBs can no longer be just generic, which leads to strong requirements on possible models for SUSY breaking should these provide a natural explanation for the scale at which electroweak symmetry is broken. In this letter we show that such constraints are highly alleviated, and the fine tuning is in principle absent, in scenarios which have negative stop masses squared at the unification scale.
While the stop mass squared is typically driven to positive values at the EW scale by gluino loops through renormalization group (RG) running, the contribution to the Higgs mass squared parameter from the running (mostly due to the top Yukawa coupling) can be arbitrarily small, which reduces fine tuning of electroweak symmetry breaking. At the same time the stop mixing is necessarily enhanced, which is known to increase the Higgs mass. Even the maximal mixing scenario for the Higgs mass can be radiatively generated (starting with negligible mixing at the GUT scale). Thus in the least fine tuned scenarios the Higgs mass is highly enhanced without any further assumptions. In spite of having tachyonic scalar masses at a high scale, such scenarios are not excluded by our current knowledge of cosmology. We discuss constraints from charge and color breaking minima on possible scenarios. Finally, we discuss the typical spectrum of these scenarios, which is characterized by a light stop, a light higgsino and a fairly light gluino. The tension between the direct search bound on the Higgs mass and naturalness of electroweak symmetry breaking can be summarized as follows [2]. At tree level, the mass of the lightest Higgs boson in the MSSM is bounded from above by the mass of the Z boson, m_h ≤ M_Z |cos 2β|, (1) where tan β = v_u/v_d is the ratio of the vacuum expectation values of H_u and H_d. The dominant one-loop correction, in case the stop mixing parameter is small, is proportional to m_t⁴ log(m̃_t²/m_t²) (for simplicity we assume m̃_t ≃ m̃_tL ≃ m̃_tR throughout this paper). It depends only logarithmically on the stop masses, and it has to be large in order to push the Higgs mass above the LEP limit. A two-loop calculation (we use FeynHiggs 2.2.10 [3,4] with m_t = 172.7 GeV) reveals that the stop masses have to be ≳ 900 GeV. On the other hand, the mass of the Z boson (M_Z ≃ 91 GeV) is given from the minimization of the scalar potential as (for tan β ≳ 5) M_Z²/2 ≃ −μ²(M_Z) − m²_Hu(M_Z), (2) and the large stop masses directly affect the running of the soft scalar mass squared for H_u, δm²_Hu ≃ −(3y_t²/8π²)(m̃_tL² + m̃_tR²) log(Λ/m̃_t). (3) Numerically, the loop factor times the large log is of order one for Λ ∼ M_GUT and we have δm²_Hu ≃ −m̃_t². Starting with negligible m²_Hu at the GUT scale we find m²_Hu(M_Z) ≃ −m̃_t² ≃ −(900 GeV)², and the correct Z mass requires that μ²(M_Z) is tuned against m²_Hu(M_Z) with better than one percent accuracy. Alternatively, we can start from large positive m²_Hu at the GUT scale. However, in this case the fine tuning is hidden in m²_Hu(M_GUT). A small change of the boundary condition m²_Hu(M_GUT) would generate a very different value for the EW scale, and the situation is quite similar to the tuning of μ. The situation highly improves when considering large mixing in the stop sector. The mixing is controlled by the ratio of A_t − μ cot β to m̃_t. Since we consider the parameter space where μ is small to avoid fine tuning, and tan β ≳ 5 in order to maximize the tree-level Higgs mass (1), the mixing is simply given by A_t/m̃_t. It was realized that mixing A_t(M_Z)/m̃_t(M_Z) ≃ ±2 maximizes the Higgs mass for given m̃_t [5], while still satisfying constraints to avoid charge and color breaking (CCB) minima [6]. Using FeynHiggs 2.2.10 we find that m̃_t(M_Z) ≃ 300 GeV and |A_t(M_Z)| ≃ 500 GeV (for tan β as small as 6) satisfy the LEP limit on the Higgs mass [22]. Therefore large stop mixing, |A_t(M_Z)/m̃_t(M_Z)| ≳ 1.5, is crucial for satisfying the LEP limit with light stop masses (the physical stop mass in this case can be as small as the current experimental bound, m̃_t1 ≳ 100 GeV).
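For orientation, the role of the mixing can be seen in the standard one-loop leading-log approximation to the lightest Higgs mass. This is a textbook formula quoted here purely for illustration, not the two-loop FeynHiggs computation used above:

```latex
% Standard one-loop approximation (illustration only);
% X_t = A_t - mu*cot(beta) is the stop mixing, v ~ 174 GeV.
m_h^2 \simeq M_Z^2 \cos^2 2\beta
      + \frac{3\, m_t^4}{4\pi^2 v^2}
        \left[ \ln\frac{\tilde m_t^2}{m_t^2}
             + \frac{X_t^2}{\tilde m_t^2}
               \left(1 - \frac{X_t^2}{12\,\tilde m_t^2}\right) \right]
```

The bracketed mixing term is maximized at X_t² = 6 m̃_t²; in the on-shell scheme used by FeynHiggs the optimum sits near |X_t/m̃_t| ≈ 2, the value quoted above, which is why moderate m̃_t combined with large mixing can clear the LEP bound.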
Decreasing the mixing requires increasing m̃_t, and finally we end up with m̃_t ≳ 900 GeV for small mixing. In order to discuss fine tuning in this case, the approximate solution of the RG equation for m²_Hu, Eq. (3), is not sufficient. For given tan β we can solve the RG equations exactly and express the EW values of m²_Hu, μ², and consequently M_Z² given by Eq. (2), as functions of all GUT scale parameters [7,8]. For tan β = 10 this yields Eq. (4), expressing M_Z² in terms of μ², m²_Hu, the gaugino masses and A_t; the parameters appearing on the right-hand side are the GUT scale parameters (we do not write the scale explicitly). The contribution of M_2 to this formula is small, and when M_2 ∼ M_3 it cancels between the ≃ −0.4M_2² term and the mixed ≃ 0.4M_3M_2 term. Other scalar masses and M_1 appear with negligible coefficients and we neglect them in our discussion. The coefficients in this expression depend only on tan β (they do not change dramatically when varying tan β between 5 and 50) and log(M_GUT/M_Z). Let us also express the EW scale values of the stop mass squared, the gluino mass and the top trilinear coupling for tan β = 10 in a similar way (Eqs. (5)-(7)). In the case of m̃_t the coefficients represent averages of the exact coefficients that would appear in separate expressions for m̃_tL² and m̃_tR². In the limit when the stop mass, m̃_t(M_Z) ≃ 300 GeV, originates mainly from M_3, from Eq. (5) we see we need M_3 ≃ 130 GeV. Then Eq. (7) shows that the necessary |A_t(M_Z)| ≃ 500 GeV is obtained only when A_t ≲ −1000 GeV or A_t ≳ 4000 GeV at the GUT scale; in both cases it has to be significantly larger than the other SSBs. The contribution from the terms in Eq. (4) containing M_3 and A_t is at least (600 GeV)², and therefore a large radiative correction has to be cancelled either by μ² or by m²_Hu(M_GUT). If m̃_t is not negligible at the GUT scale, M_3 can be smaller, but in this case we need an even larger A_t and the conclusion is basically the same. The situation is improved by considering a large A_t term. However, we still need at least 3% fine tuning. Although M_Z results from cancellations between SSBs [23], it does not mean that it is necessarily fine tuned. SUSY breaking scenarios typically produce SSBs which are related to each other in a specific way, in which case we should not treat each one of them separately. Although, in this case, our conclusions about the level of fine tuning are irrelevant, the discussion above tells us what relations between SSBs have to be generated, should M_Z emerge in a natural way. For instance, it was recently discussed that fine tuning can be reduced with a proper mixture of anomaly and modulus mediation [9,10,11], which produces boundary conditions leading to large stop mixing at the EW scale and an initial value of m²_Hu canceling most of the contribution from the running. Even if a SUSY breaking scenario produces SSBs related to each other in a way that guarantees a large degree of cancellation, they still cannot be arbitrarily heavy, because in that case an M_Z much smaller than the superpartner masses would emerge as a coincidence and we would not have a natural explanation for it. This "coincidence" problem is further amplified by the fact that the relations that have to be satisfied between SSBs in order to recover the correct M_Z depend on the energy interval over which they are evolved. Therefore a SUSY breaking scenario would have to know that the SSBs will evolve according to MSSM RG equations, and exactly from M_GUT to M_Z. There is one possibility which to a large extent overcomes this problem.
If we allow negative stop masses squared at the GUT scale, several interesting things happen simultaneously. First of all, from Eq. (5) we see that unless |m̃_t²| is too large compared to M_3², the stop mass squared will run to positive values at the EW scale. At the same time, the contribution to m²_Hu from the energy interval where m̃_t² < 0 partially or even exactly cancels the contribution from the energy interval where m̃_t² > 0, and so the EW scale value of m²_Hu can be arbitrarily close to the starting value at M_GUT; see Fig. 1. From Eq. (4) we see that this happens for m̃_t² ≃ −4M_3² (neglecting A_t). No cancellation between the initial value of m²_Hu (or μ) and the contribution from the running is required. And finally, from Eqs. (5) and (7) we see that the stop mixing is typically much larger than in the case with positive stop masses squared. For positive (negative) stop masses squared we find |A_t(M_Z)/m̃_t(M_Z)| ≲ 1 (≳ 1) starting with A_t = 0 and small m̃_t at the GUT scale. Starting with larger m̃_t, the mixing is even smaller (larger) in the positive (negative) case. Therefore large stop mixing at the EW scale is generic in this scenario, and it would actually require very large GUT scale values of A_t to end up with small mixing at the EW scale. It turns out that in the region where m²_Hu gets a negligible contribution from the running, the radiatively generated stop mixing is close to maximal even when starting with negligible mixing at the GUT scale. In this case, comparing Eqs. (5) and (7), we find the mixing given in Eq. (8) [24]. Slightly more negative stop masses squared at the GUT scale would result in maximal stop mixing at the EW scale even when starting with negligible A_t. Nevertheless, the example in Fig. 1 shows that reasonable values of the SSBs and μ² at the GUT scale naturally result in the correct M_Z. However, the absence of fine tuning is quite robust and the relation above does not have to be satisfied very precisely. If we define α by Eq. (9), then the EW scale expression (4) can be written as Eq. (10). We see that, requiring fine tuning of less than 10%, a large range of α is allowed for M_3 ≃ 200 GeV (Eq. (11)). This interval shrinks with increasing M_3, which is a sign of the coincidence problem discussed above. In summary, a very reasonable set of SSBs at the GUT scale — negative stop masses squared with m̃_t² ≃ −4M_3², and M_3 and A_t of order the other SSBs or smaller — naturally reproduces the correct EW scale. The EW scale value of m²_Hu is very close to the starting value at the GUT scale. In a simplified way this can be understood as effectively lowering the scale where the SSBs are generated to the scale where m̃_t ≃ 0 (in the example in Fig. 1 it is 10 TeV). From this scale the SSBs run in a similar way to how they would run when starting with positive stop masses. However, this scale is much closer to the EW scale, and so the δm²_Hu, Eq. (3), generated between this scale and the EW scale is considerably smaller. The stop mixing at the EW scale is close to maximal, but it is generated radiatively starting from a small mixing at the GUT scale. This is to be compared with the positive case, which requires A_t to be several times larger than the other SSBs in order to produce large enough mixing to satisfy the LEP bounds on the Higgs mass. Thus considering negative values for the stop masses squared keeps the desirable feature of radiative electroweak symmetry breaking and minimizes fine tuning. The Higgs mass is automatically enhanced, and staying above the LEP bound does not require additional constraints on the rest of the SUSY parameters. However, strong constraints can originate when considering possible CCB minima.
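The fine-tuning percentages quoted above correspond to a sensitivity measure of the Barbieri–Giudice type, Δ_a = |∂ ln M_Z² / ∂ ln a|, with x% fine tuning roughly meaning max Δ_a ≈ 100/x. A minimal numerical sketch of such a measure follows; the function name is ours and the toy coefficients are placeholders, not the paper's Eq. (4):

```python
import numpy as np

def fine_tuning(mz2, params, eps=1e-3):
    """Barbieri-Giudice sensitivities Delta_a = |dln MZ^2 / dln a|,
    by central finite differences. `mz2` maps a dict of GUT-scale
    parameters to MZ^2 (masses squared in GeV^2, masses in GeV)."""
    base = mz2(params)
    deltas = {}
    for name, a in params.items():
        if a == 0.0:
            continue          # log-derivative undefined at zero
        up, dn = dict(params), dict(params)
        up[name] = a * (1 + eps)
        dn[name] = a * (1 - eps)
        deltas[name] = abs((mz2(up) - mz2(dn)) / (2 * eps * base))
    return deltas

# Toy semi-analytic expression (placeholder coefficients, NOT Eq. (4)):
def mz2_toy(p):
    return -1.9*p["mu2"] + 5.9*p["M3"]**2 - 1.2*p["mHu2"] + 0.3*p["mst2"]

pars = {"mu2": 150.0**2, "M3": 200.0, "mHu2": 0.0, "mst2": -4 * 200.0**2}
print({k: round(v, 2) for k, v in fine_tuning(mz2_toy, pars).items()})
```

The same finite-difference utility could be pointed at a full RG solver instead of a closed-form expression; only the `mz2` callable changes.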
At the EW scale all scalar masses squared (except m²_Hu) are positive; nevertheless, as already discussed, a very large A_t term would generate a CCB minimum around the EW scale [12,13]. The EW vacuum should then be the global minimum, since otherwise the EW vacuum would rapidly tunnel to the CCB minimum, as the barrier is neither high nor thick. The optimal sufficient condition to avoid a CCB vacuum in the (H_u, t̃_L, t̃_R) plane is |A_t| ≲ 2m̃_t [6]. The A_t generated in the region we consider (8) may be close to this bound but is typically well within it. Negative stop masses squared at the GUT scale result in a potential that is unbounded from below (UFB) along the D-flat direction [14,15]. The tree-level potential at the GUT scale gets large loop corrections and the RG-improved effective potential is no longer UFB. However, it develops a CCB minimum at large VEV (compared to the EW scale). If the potential energy of the CCB minimum is lower than that of the EW minimum, the EW minimum can tunnel to the CCB minimum. In most of the parameter space the tunneling rate is small enough that the EW vacuum can live longer than the age of the universe [16,17]. More precisely, the longevity of the metastable EW vacuum puts a constraint m̃_t(M_Z) ≳ M_3(M_Z)/10 [16], and again the region of parameter space we consider is entirely safe from this bound (nevertheless, it tells us that the stop masses squared cannot be arbitrarily large and negative at the GUT scale). A possible problem is that after inflation the universe is likely to settle down in the large-VEV CCB vacuum rather than the EW vacuum. This is worrisome since the tunneling rate to the EW vacuum would be very small. However, if the reheating temperature is high enough, the large-VEV CCB minimum might disappear in the finite temperature effective potential. For a given set of SSBs, there is a minimum reheating temperature above which the large-VEV CCB vacuum disappears [18]. It depends on how inflation ends, and the SSBs will constrain compatible inflation scenarios. In this letter we focused on the SUSY parameters relevant for radiative EWSB and the discussion of fine tuning. An interesting signature of this scenario is the stop splitting, m̃_t1,t2 ≃ m̃_t(M_Z) ∓ m_t, and stops considerably lighter than the gluino: m̃_t(M_Z) ≲ 0.5m_g̃ with m_g̃ ≳ 600 GeV (the lighter stop thus can be as light as 130 GeV). Besides these, the scenario has a light higgsino, a possible candidate for the LSP. Other scalar soft masses squared are unconstrained by considerations of fine tuning and can be positive, or even all negative at the GUT scale in complete models. In some SUSY breaking scenarios the sign of the scalar masses squared is not determined [19], while in others it can be fixed and negative. For example, negative scalar masses squared arise in gauge mediation with gauge fields as messengers [20], or one can utilize the minus sign arising in the see-saw mechanism for scalar masses [21]. It is desirable to build fully realistic models of this type in which constraints from CCB can be addressed. In specific scenarios additional potential problems may occur, such as negative slepton masses squared at the EW scale, since the right-handed sleptons receive contributions only from M_1. Even if this contribution is large enough to drive the slepton masses squared positive, we can still end up with a stau LSP. Nevertheless, all the positive features of negative stop masses squared suggest it is worthwhile to seriously search for models which can lead to the boundary conditions discussed above.
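The spectrum-level constraints collected above are simple inequalities (CCB bound |A_t| ≲ 2m̃_t, metastability m̃_t(M_Z) ≳ M_3(M_Z)/10, approximate stop splitting m̃_t1,t2 ≃ m̃_t ∓ m_t). A small sketch that screens candidate EW-scale points against them; the thresholds are taken directly from the text, the helper name is ours:

```python
def passes_constraints(mstop, At, M3, mtop=172.7):
    """Screen an EW-scale point against the bounds quoted in the text.
    All masses in GeV; mstop is the common soft stop mass."""
    ccb_ok   = abs(At) <= 2.0 * mstop        # avoid CCB minimum [6]
    meta_ok  = mstop >= M3 / 10.0            # metastable EW vacuum [16]
    t1, t2   = mstop - mtop, mstop + mtop    # approximate stop splitting
    light_ok = t1 >= 100.0                   # direct stop search bound
    return ccb_ok and meta_ok and light_ok, (t1, t2)

ok, (mt1, mt2) = passes_constraints(mstop=300.0, At=-500.0, M3=650.0)
print(ok, round(mt1), round(mt2))   # True 127 473 for this sample point
```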
[23] To some extent this is already signaled by bounds on the masses of superpartners from direct searches; however, the limits on the Higgs mass and the above discussion make this absolutely clear. [24] To be more precise, the generated mixing is somewhat larger than that shown in this equation, since we should minimize the potential at the SUSY scale ∼ m̃_t and should not run the SSBs all the way to M_Z (see Fig. 1).
2018-04-03T05:43:16.788Z
2006-01-05T00:00:00.000
{ "year": 2006, "sha1": "5796230101933d73d036864e79c02395b9e0c80d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/0601036", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "5796230101933d73d036864e79c02395b9e0c80d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
14097406
pes2o/s2orc
v3-fos-license
Efficient silencing of gene expression with modular trimeric Pol II expression cassettes comprising microRNA shuttles Expressed polycistronic microRNA (miR) cassettes have useful properties that can be utilized for RNA interference (RNAi)-based gene silencing. To advance their application we generated modular trimeric anti-hepatitis B virus (HBV) Pol II cassettes encoding primary (pri)-miR-31-derived shuttles that target three different viral genome sites. A panel of six expression cassettes, comprising each of the possible ordering combinations of the pri-miR-31 shuttles, was initially tested. Effective silencing of individual target sequences was achieved in transfected cells and transcribed pri-miR trimers generated intended guide strands. There was, however, variation in processing and silencing by each of the shuttles. In some cases the monomers' position within the trimers influenced processing and this correlated with target silencing. Compromised efficacy could be compensated by substituting the pri-miR-31 backbone with a pri-miR-30a scaffold. Inhibition of HBV replication was achieved in vivo, and in cell culture without disruption of endogenous miR function or induction of the interferon response. A mutant HBV target sequence, with changes in one of the guide cognates, was also silenced by the trimeric cassettes. The modular nature of the cassettes together with compatibility with expression from Pol II promoters should be advantageous for gene silencing applications requiring simultaneous targeting of different sites. INTRODUCTION The powerful and specific gene silencing that may be achieved by harnessing the RNA interference (RNAi) pathway is potentially useful for developing new therapies required to treat a variety of diseases. In addition, application of RNAi has utility for the study of gene function. Both synthetic and expressed sequences are being developed to activate RNAi (1). Exogenous expression cassettes achieve this by transcribing mimics of intermediates of the microRNA (miR) processing pathway (2). Short hairpin RNAs (shRNAs), which are typically expressed from Pol III promoters, simulate precursor miR (pre-miR) products of Drosha/DGCR8 processing. Primary miR (pri-miR) shuttles are analogues of nascent miR transcripts and their processing is compatible with expression from Pol II transcription regulatory elements (3-7). This important property provides the means of improving control of production of RNAi activators and thereby limiting unwanted off target effects caused by saturating the endogenous miR pathway (8). Pri-miR-like shuttles are also thought to effect superior silencing by simulating natural miR processing more closely. Processing of pri-miR shuttles by Drosha/DGCR8, which is bypassed by shRNAs, may improve entry into the RNAi pathway (4). pri-miR-30 was initially the most widely utilized backbone (9-11), but other pri-miR shuttles such as miR-155 (6), miR-31 and miR-122 (3) have since been used successfully to generate exogenous RNAi effecters. The polycistronic arrangement of some naturally occurring miR clusters is an additional property that may be exploited to generate combinatorial multitargeting RNAi expression cassettes (4,6,7). This is particularly useful to improve knockdown efficacy and overcome attenuation of silencing caused by target site mutation such as often occurs during chronic viral infection.
Recently, the miR-106 (7) and miR-17-92 (4) polycistronic clusters have been used successfully to generate multiplexed anti-HIV-1 RNAi activators. To improve use of expressed multimeric RNAi effecters, a system that allows convenient assembly, that permits modification to improve silencing efficacy, and that causes knockdown without disrupting the endogenous miR pathways would be valuable. We demonstrate these attributes in a panel of anti-hepatitis B virus (HBV) Pol II trimeric pri-miR cassettes, which are capable of inhibiting viral replication in transfected cells and in vivo. miR-16 sponge and dual luciferase target The U6-driven miR-16 sponge (15) was generated by cloning seven copies of an imperfectly complementary target of miR-16 into the U6+27 sequence (16-18). A single copy of duplex DNA, comprising annealed oligonucleotides encoding a single copy of the miR-16 target site (miR-16S) with single-nucleotide 3′ A overhangs, was initially ligated to pTZ57R/T to create pTZ-miR-16S×1. The resulting target sequence included an XhoI site that was 5′ of the target and SalI and NotI sites 3′ of this sequence. Oligonucleotide sequences used to generate the inserts were miR-16S F: 5′-CTC GAG CGC CAA TAT TAT GTG CTG CTA GTC GAC GCG GCC GCA-3′ and miR-16S R: 5′-GCG GCC GCG TCG ACT AGC AGC ACA TAA TAT TGG CGC TCG AGA-3′. The miR-16S×1 sequence was restricted from pTZ-miR-16S×1R (insert in reverse orientation with respect to the β-galactosidase gene) with ApaI and PvuII and cloned into the ApaI and HincII sites of pGEM®-T Easy (Promega, WI, USA) to create pG-miR-16S×1. To generate vectors with tandem copies of the miR-16S sequence, pG-miR-16S×1 was digested with XhoI and ScaI and separately with SalI and ScaI. The fragments containing the miR-16S sequence from each digestion were ligated to create pG-miR-16S×2. pG-miR-16S×3 and pG-miR-16S×4 were generated using similar procedures. Finally, the vectors containing three and four tandem copies of the miR-16S sequence were used to create pG-miR-16S×7. The U6+27 sequence (18) was produced using a two-step PCR of the human U6 promoter. U6 forward (U6 F, 5′-GAT CTC TAG AAA GGT CGG GCA GGA AGA GGG-3′) and U6+27 reverse 1 (U6+27 R1, 5′-CTC GAG TAG TAT ATG TGC TGC CGA AGC GAG CAC GGT GTT TCG TCC TTT CCA C-3′) primers were used in the first round of amplification. Amplicons from this reaction were used as template for the second round of PCR using the U6+27 R2 primer (5′-GAT CAA AAA AGC GGA CCG AAG TCC GCT CTA GAC TCG AGT AGT ATA TGT GCT G-3′) and the U6 F primer. The complete U6+27 sequence was inserted into the PCR cloning vector pTZ57R/T to generate pTZ-U6+27. The miR-16S×7 sequence was removed from pG-miR-16S×7 with XhoI and SalI and ligated to the XhoI site of pTZ-U6+27 to produce the pTZ-U6-miR-16S×7 sponge plasmid. To generate the psiCHECK-miR-16T×7 target vector containing 7 miR-16 sites downstream of the Renilla luciferase ORF, the miR-16S×7 sequence was restricted from pG-miR-16S×7 with XhoI and NotI and inserted into the equivalent sites of psiCHECK™-2 (Promega, WI, USA). Cell culture, transfection, northern blot analysis and dual luciferase assay Huh7 cells were cultured in DMEM (Lonza, Basel, Switzerland) supplemented with 10% fetal calf serum (Gibco BRL, UK). To determine the efficacy of individual pri-miR monomers in the context of multimeric cassettes, each trimeric plasmid (800 ng) was co-transfected with psiCHECK-5T, psiCHECK-8T or psiCHECK-9T (80 ng).
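The iterative doubling described above works because XhoI (C^TCGAG) and SalI (G^TCGAC) are isocaudamers: both leave 5′-TCGA overhangs, and the hybrid junction CTCGAC formed on ligation is cut by neither enzyme, so each product retains exactly one outer XhoI and one outer SalI site for the next round. A minimal string-level sketch of this logic; the helper name is ours and the NotI tail of the cassette is omitted for brevity:

```python
# Isocaudamer doubling behind miR-16S x2 -> x4 (-> x7).
XHOI, SALI = "CTCGAG", "GTCGAC"

def double(unit):
    """Tandem-duplicate a cassette of the form XhoI-target-SalI.
    Ligating the SalI end of copy 1 into the XhoI end of copy 2 gives
    XhoI-target-CTCGAC-target-SalI, ready for another round."""
    assert unit.startswith(XHOI) and unit.endswith(SALI)
    core = unit[len(XHOI):-len(SALI)]
    return XHOI + core + "CTCGAC" + core + SALI   # hybrid junction

unit = XHOI + "CGCCAATATTATGTGCTGCTA" + SALI   # miR-16S monomer (Methods)
two, four = double(unit), double(double(unit))
for s in (unit, two, four):                    # outer sites are preserved:
    print(s.count(XHOI), s.count(SALI))        # prints "1 1" each time
```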
Luciferase activity was assayed using the Dual-Luciferase® Reporter Assay System (Promega, WI, USA) and the ratio of Renilla luciferase to Firefly luciferase activity was determined. Silencing of mutant HBx sequences was assayed similarly using the psiCHECK-HBx and psiCHECK-mHBx dual luciferase reporter vectors. To assess HBV knockdown efficacy of the Pol III and Pol II pri-miR shuttles, Lipofectamine 2000™ (Invitrogen, CA, USA) was used to co-transfect 80 ng pCH-FLuc and 800 ng of the relevant pri-miR shuttle plasmid, together with effecter plasmid or vector control plasmid, according to previously described methods (3). phRL-CMV (Promega, WI, USA), a plasmid constitutively expressing Renilla luciferase, was included in all transfections. Forty-eight hours after transfection, cells were assayed for luciferase activity using the Dual-Luciferase® Reporter Assay System (Promega, WI, USA) and the ratio of Firefly luciferase to Renilla luciferase activity was calculated. Northern blot analysis was performed on RNA extracted from cells transfected with the various miR-31 shuttle constructs according to previously described methods (3). The probes for the 5, 8 and 9 guide sequences were 5′-CCG TGT GCA CTT CGC TTC-3′, 5′-CAA TGT CAA CGA CCG ACC-3′ and 5′-TAG GAG GCT GTA GGC ATA-3′, respectively. Scanned autoradiographs were used to quantitate guide bands using KODAK MI Software. Knockdown of HBV replication was assessed in cells co-transfected with pCH-9-3091 (14) and the relevant RNAi effecter plasmid. Forty-eight hours after transfection, growth medium was harvested and HBsAg secretion measured by ELISA using the MONOLISA® HBs Ag ULTRA kit (Bio-Rad, CA, USA). Activation of the interferon (IFN) response was assessed using previously described methods (19). Assays to assess saturation of the endogenous miR pathway were performed in Huh7 cells cotransfected with 80 ng psiCHECK-miR-16T×7 and 800 ng RNAi effecter plasmids or miR-16 sponge plasmid. Luciferase assays were performed as described above. Assessment of efficacy of pri-miR-31 shuttles in vivo Mice were injected using the hydrodynamic injection procedure with a combination of 5 μg pCH-9-3091 (14), 5 μg of RNAi expression vector, 5 μg of control U6 (pTZ-U6 vector (12)) or CMV (pCI-neo, Promega, WI, USA) promoter-containing backbone plasmid and 5 μg of psiCHECK2.2. Blood was collected 3 and 5 days post-injection. Experiments were carried out according to protocols approved by the University of the Witwatersrand Animal Ethics Screening Committee. ELISA for HBsAg levels was performed on serum samples using the MONOLISA® HBs Ag ULTRA kit from Bio-Rad. Statistical analysis Data are expressed as the mean ± standard error of the mean (SEM). Statistical difference was considered significant when P < 0.05 and was determined according to the Student's paired two-tailed t-test. Calculations were done with the GraphPad Prism software package (GraphPad Software Inc., CA, USA). Design of trimeric pri-miR expression cassettes The structure of the expression cassettes producing trimeric anti-HBV pri-miR-31 mimics is depicted schematically in Figure 1A. The pri-miR-31 backbone was initially selected as we have previously shown that single-unit shuttles with this scaffold can be used to generate efficient Pol II anti-HBV expression cassettes (3). Sequences encoding the pri-miR-31 trimers were located within an exon and downstream of a CMV Pol II transcription controlling element and intron sequence.
Trimeric cassettes were designed such that the pre-miRs comprised 59 nt and were flanked by 51 nt of natural pri-miR-31-derived sequences (Figure 1B). According to this scheme, the mature anti-HBV miRs were predicted, using the MFold algorithm (20), to have a similar structure to that of naturally occurring pri-miR-31. To assess the modular nature of the cassettes, six different trimeric expression cassettes were generated using all possible ordering combinations of the three pri-miR-31 shuttles. Computer-based predictions indicated that the intended miR-31-like structures of the trimeric cassettes were energetically most favourable and similar for each of the six ordering combinations. The calculated ΔG value of each was approximately −195 kcal/mol. Detection of processed pri-miR sequences and silencing of individual targets To verify formation of individual guide sequences, northern blot analysis was carried out on RNA extracted from Huh7 liver-derived cells transfected with DNA expressing the pri-miR-31 shuttles (Figure 2A-C). Hybridization to a probe complementary to the intended miR-31/5 HBV guide showed heterogeneous processing to form guide sequences of 20-22 nt in length. Guide strand 5 production was similar when generated from monomeric and trimeric cassettes and was not affected by shuttle position within the anti-HBV polycistron. As expected, measurement of relative guide band intensities showed that the 20-22 nt HBV anti-sense sequence was present in considerably higher amounts in cells transfected with the U6 shRNA 5-expressing plasmid when compared to cells transfected with the pri-miR trimer shuttles. Specific knockdown of the target 5 sequence, assessed using a dual luciferase reporter system, was similar and highly effective (∼90%) for the U6 shRNA 5 and each of the pri-miR-31 shuttle expression cassettes (Figure 2D). Northern blot analysis to detect guide 8 revealed a single band of 21 nt (Figure 2B), which was distinct from the heterogeneous mature pri-miR-31/5 sequences. Interestingly, no mature pri-miR-31/8 was detectable in RNA extracted from cells transfected with CMV pri-miR-31/9-8-5 and CMV pri-miR-31/5-9-8. This suggests that pri-miR-31/8 shuttle position within the trimer affects processing, and the presence of pri-miR-31/9 immediately upstream of pri-miR-31/8 may be responsible for compromised guide 8 production. Assay of knockdown using a dual luciferase assay with the miR-31/8 target alone confirmed that knockdown of reporter expression correlated with detection of mature miR sequences (Figure 2E). Interestingly, the guide produced from the U6 shRNA 8 cassette was slightly larger than that of the CMV miR-31-derived sequences. It is likely that the differences in secondary structure of the shRNA 8 and pri-miR-31/8 RNA, as well as the involvement of Drosha in miR shuttle processing, are responsible for generation of guide strands of different molecular weight. Analysis of silencing and processing of pri-miR-31/9 sequences showed less efficient knockdown (45-80%) of the ORF containing the pri-miR-31/9 target (Figure 2F), which correlated with a lower efficiency of pri-miR-31/9 guide production (Figure 2C). Although there is variation in the efficiency of individual guide strand production and knockdown, these data indicate that the pri-miR-31 scaffold is useful for production of Pol II trimeric cassettes.
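Because each northern probe is the reverse complement of the mature guide it detects, the expected guide sequences follow directly from the probe sequences listed in the Methods. A small sketch of that bookkeeping (standard reverse-complement logic, not the authors' software):

```python
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(dna):
    """Reverse complement of a DNA probe; the RNA guide it detects
    is this sequence with T replaced by U."""
    return dna.translate(COMP)[::-1]

probes = {  # northern probes from the Methods (5'->3')
    "guide 5": "CCGTGTGCACTTCGCTTC",
    "guide 8": "CAATGTCAACGACCGACC",
    "guide 9": "TAGGAGGCTGTAGGCATA",
}
for name, probe in probes.items():
    print(name, revcomp(probe).replace("T", "U"))
```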
Incorporating a pri-miR-30a monomer scaffold improves silencing of HBV target 8 Sequence-specific properties of the individual anti-HBV pri-miR-31 shuttles, as well as the position of monomer shuttles within the trimers, are likely to influence their processing and silencing efficiency. To assess the effect of pri-miR backbone scaffold sequences within the expression cassettes, silencing of target 8 sequences by an expanded panel of trimeric expression cassettes that included pri-miR-122/5 and pri-miR-30a/8 shuttles was measured (Figure 3). Each of the six pri-miR-31 trimers, together with pCMV pri-miR-31/5-31/9-30a/8 and pCMV pri-miR-122/5-31/9-30a/8, was co-transfected with psiCHECK-8T. As described before (Figure 2E), knockdown of Renilla luciferase activity was poor with cassettes containing pri-miR-31/5-9-8 and pri-miR-31/9-8-5. However, efficient inhibition of reporter gene activity was achieved with pCMV pri-miR-31/5-31/9-30a/8 and pCMV pri-miR-122/5-31/9-30a/8 (Figure 3B). This indicates that substitution of the pri-miR-31 scaffold with a pri-miR-30a backbone restores target 8 silencing. In addition, inclusion of the pri-miR-122/5 monomer, which we have previously shown to act efficiently against HBV (3), does not compromise silencing by the pri-miR-30a/8 monomer. Thus, in addition to allowing improved silencing efficiency by changing monomer positions within the trimers, the cassettes described here have the added advantage of permitting the changing of pri-miR shuttle backbones to compensate for any compromised silencing efficacy of pri-miR-31 scaffolds. miR-mediated inhibition of markers of HBV replication in transfected cells and in vivo Target sites of the individual miR cassettes are located within the HBV X (HBx) ORF (Figure 4A). This sequence is conserved, common to all viral transcripts and has been shown to be a good target for RNAi-based HBV silencing (19). A dual luciferase assay, in which the surface ORF of pCH-9/3091 was substituted with a Firefly luciferase ORF, demonstrated that each of the trimeric cassettes achieved good knockdown when all three cognates of the intended miR-31/5, miR-31/8 and miR-31/9 guides were present (Figure 4B). As an initial assessment of pri-miR-31 trimer-mediated inhibition of HBV replication, Huh7 liver-derived cells were transfected with the pCH-9-3091 HBV replication-competent plasmid (14), together with the panel of RNAi expression plasmids. Secreted HBV surface antigen (HBsAg), which is a reliable indicator of HBV replication in our hands (3,19,21), was determined thereafter in the culture medium (Figure 4C). Knockdown of ∼90% was achieved. This correlated with the inhibitory effect that was observed when using the dual luciferase reporter system to measure silencing of individual targets (Figure 2D-F) and also with inhibition of a Firefly luciferase-HBx reporter gene construct (Figure 4B). To determine silencing of target genes in vivo in a model that simulates HBV replication, mice were co-injected with an HBV replication-competent plasmid together with a selection of vectors encoding pri-miR-31 shuttles using the hydrodynamic procedure (22). Significant knockdown of HBsAg was observed at Days 3 and 5 after the injection, and the effects appeared to be independent of promoter interference (Figure 4D). These findings confirm that trimeric pri-miR-31 shuttles are capable of silencing HBV replication and verify that they are active against transcripts that are produced during viral replication in vivo.
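The knockdown percentages reported here follow from ratio normalization against a mock control. A minimal sketch of that arithmetic, with the paired two-tailed t-test named in the Methods; the raw readings are hypothetical and the helper name is ours:

```python
import numpy as np
from scipy import stats

def knockdown(ratios_effecter, ratios_mock):
    """Percent knockdown from target/control luciferase ratios
    (e.g. Firefly/Renilla for pCH-FLuc), normalized to mock."""
    rel = np.asarray(ratios_effecter) / np.mean(ratios_mock)
    return 100.0 * (1.0 - rel)

# Hypothetical triplicate Firefly/Renilla ratios
mock = [1.02, 0.95, 1.03]
trimer = [0.11, 0.09, 0.10]
kd = knockdown(trimer, mock)
t, p = stats.ttest_rel(trimer, mock)   # paired two-tailed t-test
print(f"knockdown {kd.mean():.0f}±{stats.sem(kd):.1f}%, p={p:.3f}")
```

The same normalization applies to the HBsAg ELISA readings, with culture-medium concentrations in place of luciferase ratios.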
A potential advantage of employing multimeric cassettes to inhibit viral replication is that viral escape resulting from the emergence of evading mutations is limited. A dual luciferase assay was undertaken to assess whether HBV target silencing occurred when mutations were introduced into the target 5 site of HBV (Figure 5A). Co-transfection of cells was carried out with each plasmid encoding the panel of eight trimeric expression cassettes together with wild-type or mutant HBx target. Silencing of reporter gene expression was achieved with all trimeric expression cassettes, with the exception of mutant target silencing by CMV pri-miR-31/5-9-8 and CMV pri-miR-31/9-8-5. This was expected, as these two expression cassettes are known to generate anti-HBV guide 9 in low amounts and to be defective with respect to guide 8 production (Figure 2). Importantly, mutant target silencing was restored in these cassettes by changing the pri-miR-31/8 monomer for a pri-miR-30a/8 unit, which confirms the earlier observations (Figure 3) that target 8 silencing may be improved by substituting the pri-miR scaffold of its guide. Thus trimeric pri-miR cassettes are capable of efficiently silencing targets containing one mutant guide cognate, and defective silencing may be overcome by changing the order of the miR units or the scaffold sequence within the monomers. Exclusion of non-specific effects induced by pri-miR-31 trimer cassettes Verification that the pri-miR-31 trimer cassettes are indeed non-toxic and induce gene silencing by an RNAi-mediated mechanism is important to establish. To address this, disruption of the endogenous miR pathway and stimulation of the innate IFN response by pri-miR-31 trimer cassettes were assessed. Measurement of IFN-β mRNA concentration in cells transfected with trimer expression cassettes showed no elevation of this transcript, indicating that little or no immunostimulation is caused by IFN pathway induction (Figure 6). To assess disruption of the endogenous miR pathway, we adapted the recently described method that utilizes miR sponges as a control to verify derepressive effects of endogenous miR function (15). [Figure 3 legend: Use of shuttles containing pri-miR-31, pri-miR-30a and pri-miR-122 to improve silencing of the HBV target 8 sequence. (A) Predicted structures and sequences of pri-miR-30a/8 and pri-miR-122/5 anti-HBV sequences. Colour coding of the sequences representing putative pre-miRs and mature guides is as indicated in Figure 1B. (B) Assessment of knockdown efficacy of trimer shuttles using a dual luciferase reporter gene assay in which a target sequence complementary to guide 8 was inserted downstream of the Renilla luciferase reporter ORF of psiCHECK2.2. Data are represented as mean ratios of Renilla to Firefly luciferase activity (±SEM) and are normalized relative to the mock-treated cells.] A dual luciferase reporter vector was generated in which seven copies of an imperfectly matched endogenous miR-16 target were inserted downstream of the Renilla luciferase ORF (Figure 7A). Perturbations in miR-16 translational suppression could be detected sensitively by measuring Renilla/Firefly luciferase reporter gene activity. miR-16 was selected for this assay as it is expressed in a variety of tissues (23) and can be conveniently used to determine disruption of natural miR function. A miR sponge expression cassette that encodes seven tandemly repeated miR-16 target sites was used to control for endogenous miR derepression (Figure 7B).
Analysis revealed that co-expression of each of the trimeric constructs within transfected cells did not cause derepression of miR-16 inhibition of its cognate in the reporter fusion sequence (Figure 7C). Pol II promoter-controlled expression of the trimeric anti-HBV miR-31 shuttles therefore causes no detectable toxicity resulting from IFN response induction or disruption of the endogenous miR pathway. DISCUSSION The powerful gene silencing that can be achieved by harnessing RNAi has facilitated development of new approaches to inhibition of pathology-causing genes and the study of gene function (1). Although synthetic siRNAs have been favoured as RNAi activators for many such applications, use of expressed silencing sequences has several advantages. These include achievement of sustained knockdown, compatibility with recombinant viral vectors and evasion of some of the immunostimulatory properties of exogenous synthetic sequences (24). [Figure 4 legend: (A) Schematic of the pCH-9/3091 vector with ORFs and the sites that are targets complementary to processed products of the pri-miR-31/5, pri-miR-31/8 and pri-miR-31/9 expressing vectors. Four parallel arrows indicate the HBV transcripts, which have common 3′ ends and include the pri-miR-31/5, pri-miR-31/8 and pri-miR-31/9 targets. The pCH-9/3091-derived pCH-FLuc target vector has the Firefly luciferase ORF substituted for the preS2/S HBV sequence. (B) Luciferase reporter gene-based assay of knockdown efficacy in situ. pCH-FLuc was cotransfected with plasmids containing the indicated RNAi expression cassettes in addition to a plasmid constitutively expressing Renilla luciferase. Results are given as ratios of Firefly to Renilla luciferase activity. The column labelled negative represents data from transfections that excluded the pCH-FLuc plasmid. (C) Concentration of HBsAg in culture supernatants of Huh7 cells 48 h after transfection with the pCH-9/3091 HBV replication-competent plasmid together with the indicated anti-HBV expression cassettes. The column labelled negative represents data from transfections that excluded the pCH-9/3091 HBV plasmid. (D) Silencing of HBV replication in vivo. Serum concentration of HBsAg was measured at Days 3 and 5 after hydrodynamic injection of mice with replication-competent vector and plasmids expressing anti-HBV RNAi sequences. Mock injections included control backbone plasmids containing U6 or CMV promoters that did not express anti-HBV RNAi effecters.] Convenient expression of silencing sequences that efficiently target multiple sites would be a particularly useful attribute to enhance knockdown and counter evading target mutations. Achieving this without causing unintended off-target effects, and without needing to utilize complex systems that require multiple expression cassettes, is desirable. The engineered polycistrons described here provide a suitable method to attain these objectives. Cassettes were generated using pri-miR-31-, pri-miR-30a- and pri-miR-122-derived modules, which were combined as trimers and expressed from a Pol II promoter. Efficient processing of the shuttles and silencing of HBV cognates was observed without evidence for disruption of the endogenous miR pathway. Detailed analysis revealed variation in the efficacy of individual units that was dependent on the specific sequences of the monomers as well as their position within the engineered polycistrons. Despite simultaneous production of three RNAi effecters from a single transcript, the mature guide sequences were not formed in equimolar amounts.
Although knockdown of single targets may be compromised as a result of poor processing of individual guides, the modular nature of the cassettes facilitates improvement of defective silencing. Rearranging the order of the pri-miR units, which is not easily achieved with polycistronic miR cluster mimics, may restore the function of individual miR shuttles. In addition, the cassettes described here allow efficacy to be improved by substituting poorly acting pri-miR-31 scaffolds with other backbone shuttles, such as those derived from pri-miR-30a and pri-miR-122. The sequence-specific differences in individual guide processing and target knockdown that we observed are not surprising, but they are currently difficult to explain. Although computer-predicted structures of the shuttles were similar, empirical characterization of the processing of expressed RNAi effectors remains critically important. Previous investigations have demonstrated that a combinatorial approach to knockdown of HIV-1 replication augments silencing and prevents the emergence of viral escape mutants (25). It has been calculated that four optimally acting individual antiviral guide sequences are required to prevent HIV-1 escape from RNAi (26), and anti-HIV-1 RNAi activators have been designed accordingly (4,7). Unlike HIV-1, the HBV genome comprises overlapping ORFs with embedded viral cis elements (27). This highly compact arrangement of the genome restricts the ability of HBV to mutate without compromising its replication fitness. The number of RNAi effectors within a combinatorial cassette required to prevent the emergence of HBV escape mutants has not been established, but it is likely to be fewer than the four required for HIV-1. Nevertheless, although only three pri-miR-31 shuttles were tested in the polycistronic cassettes described here, it is likely that a larger number of monomeric modules can be accommodated. An important concern for the development of RNAi-based therapy is the avoidance of off-target effects. Unintended consequences may result from disruption of endogenous miR functions and from silencing of normal cellular genes that have partial sequence complementarity to exogenous RNAi activators. We have shown that endogenous miR-16-mediated repression of a target reporter fusion is unaffected by expression of the Pol II pri-miR-31 trimer cassettes. Generating multiple silencing sequences from an engineered polycistronic cassette potentially increases the likelihood of causing unintended off-target effects. Interaction of a guide seed region, comprising nucleotides 2-8 from the 5′ end, is potentially sufficient to effect translational suppression of a cellular target (a point made concrete in the sketch after this section). This weak sequence restraint emphasizes the importance of utilizing potent expressed silencing sequences that are effective at low concentration, and of restricting the expression of RNAi effectors to target tissues. Compatibility of polycistronic pri-miR shuttles with expression from Pol II promoters, and efficacy equivalent to that of shRNA transcribed from a U6 promoter, are useful features that may be harnessed to diminish unintended effects. Compared with Pol III promoters, Pol II transcription regulatory elements have greater versatility, which facilitates production of mature RNAi effectors without perturbing endogenous miR function. In the case of developing RNAi-based HBV therapy, expression cassettes containing liver-specific promoters that are induced by target-encoded transcription activators, e.g.
the HBV X protein (28,29), should facilitate transcription regulation, with consequent attenuation of off-target effects.

Figure 7. (A) Schematic illustration of the dual luciferase psiCHECK-derived vector with seven copies of the miR-16 target inserted downstream of the Renilla luciferase ORF. Firefly luciferase, constitutively expressed from the same plasmid, was used to normalize data. (B) Schematic illustration of the expression cassette that generates a transcript containing seven copies of an imperfectly matched miR-16 target. The transcript contains 5′ U6+27 and 3′ stem sequences, which are thought to improve the stability of U6 Pol III transcripts. (C) Analysis of the effects of pri-miR-31 expression cassettes on endogenous miR-16 repression of the target reporter sequence using a dual luciferase assay. The reporter plasmid, containing seven copies of the miR-16 target inserted downstream of the Renilla luciferase ORF, was co-transfected together with RNAi expression cassettes, empty backbone plasmid (mock) or the miR-16 sponge plasmid. The ratio of Renilla to Firefly luciferase activity was measured to assess derepression of endogenous miR-16 by co-expressed miR shuttles.
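To make the seed-region constraint from the Discussion concrete, the sketch below scans a transcript for the reverse complement of a guide's seed (nucleotides 2-8 from the 5′ end), the minimal pairing that can suffice for translational suppression. Both sequences are hypothetical placeholders, not the guides or targets used in this work:

```python
def revcomp(seq):
    """Reverse complement of an RNA sequence."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def seed_match_positions(guide, transcript):
    """Return 0-based positions in `transcript` matching the guide seed.

    The seed is nucleotides 2-8 of the guide (5'->3'); a target site must
    contain the reverse complement of the seed to base-pair with it.
    """
    seed = guide[1:8]        # nucleotides 2-8 (0-based slice)
    site = revcomp(seed)     # sequence a target must contain
    return [i for i in range(len(transcript) - len(site) + 1)
            if transcript[i:i + len(site)] == site]

guide = "UGCUAUGCCUCAUCUUCUU"                       # hypothetical guide
transcript = "AAGAAGAUGAGGCAUAGCAGCAAGAAGAUGAGGC"   # hypothetical mRNA
print(seed_match_positions(guide, transcript))      # -> [11]
```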
Promoting superconductivity in FeSe films via fine manipulation of the crystal lattice

Stabilized FeSe thin films at ambient pressure with tunable superconductivity would be a promising candidate for superconducting electronic devices, yet the superconducting transition temperature (Tc) is below 10 K in bulk materials. By carefully controlling depositions on twelve kinds of substrates using the pulsed laser deposition technique, high-quality single-crystalline FeSe samples were fabricated, with a full width at half maximum of 0.515° in the rocking curve and a clear four-fold symmetry in the φ-scan from X-ray diffraction. The films have a maximum Tc of 15 K on the CaF2 substrate and do not show obvious decay in air for more than half a year. By slightly tuning the stoichiometry of the FeSe targets, the Tc becomes adjustable from 15 K to below 2 K with quite narrow transition widths of less than 2 K, and shows a positive relation with the out-of-plane (c-axis) lattice parameter of the films. However, there is no clear relation between the Tc and the surface atomic distance of the substrates. On reducing the thickness of the films, the Tc decreases and fades away in samples thinner than 10 nm, suggesting that the strain effect is not responsible for the enhancement of Tc in our experiments.

I. INTRODUCTION

The Fe-based superconductors have attracted considerable attention for their superconducting nature and promising applications. [1,2] Among the various Fe-based superconductors, β-FeSe possesses the simplest structure, an anti-PbO-type structure (space group P4/nmm) with stacks of Fe2Se2 layers, but displays the most multifarious physical properties. [3,4] FeSe bulk crystals exhibit a superconducting transition temperature (Tc) of 9 K. [5,6] Under external pressure, the Tc can be enhanced up to 38 K for FeSe, which is attributed to a decrease of the anion height from the Fe-square planes, highlighting the impact of the crystal lattice on superconductivity. [5,7,8] Unexpectedly, the Tc can be raised further for one unit cell (UC) of FeSe on a SrTiO3 substrate. [9-13] Because of their extreme sensitivity to oxygen, such ultra-thin films of one or a few UC can only be achieved in high vacuum with in-situ fabrication and characterization, which limits research and applications. Therefore, more stabilized FeSe films, comparable to or even better than the bulk crystals, are highly desired for the next generation of superconducting electronic devices. The fabrication of FeSe thin films has been widely studied; among these studies, the pulsed laser deposition (PLD) and molecular beam epitaxy (MBE) techniques are most commonly used. [2,14-16] From the application point of view, PLD is much more efficient for the growth of films with moderate thickness (above 100 nm). [15-20] Basically, FeSe can be grown onto various substrates, such as LaAlO3, SrTiO3 and MgO, but the Tc values are generally equal to or lower than that of bulk crystals. A recent report showed that another substrate, CaF2, could enhance the Tc up to 11.4 K at a thickness of 150 nm, considerably higher than that of bulk FeSe, which was attributed to the in-plane compressive strain from the substrate. [21] The strain from the mismatch between the substrate and the film may play a role in promoting the Tc, but it usually takes effect only within a limited film thickness, plausibly for the ultra-thin FeSe films, where the Tc decreases quickly from 1 UC to 3 UC.
[22] Therefore, it is still an open question why the Tc is enhanced in thick films of around 160 nm. Besides the strain induced by the lattice mismatch, [18,21,23] other effects, such as modification of the out-of-plane lattice parameter, [8] sample inhomogeneity from Fe vacancies, [24,25] as well as the growth conditions, [15] could also influence the superconductivity. To uncover the substrate effects on superconductivity, it is important to systematically study the crystal lattice and superconducting properties of FeSe films on various substrates. In this letter, we report the successful synthesis of a series of high-quality single-crystalline superconducting FeSe films on various substrates by the PLD technique, with a maximum Tc of up to 15 K. In addition, different growth parameters, such as the ratio of Fe to Se in the targets and the thickness of the films, were elaborately tuned to arrive at a broad range of zero-resistance transition temperatures Tc0. Based on these abundant, high-quality and stabilized samples, the relation between the crystal lattice and the tunable Tc0 has been carefully studied.

Figure 2. (color online) Temperature dependence of the normalized resistance R/RN for FeSe thin films on various substrates, where RN corresponds to the resistance at 300 K and 20 K, respectively; the thickness of all films is ∼160 nm.

II. EXPERIMENTAL

FeSe polycrystalline targets were fabricated by the solid-state reaction method. The starting materials of Fe (4N, Alfa Aesar Inc.) and Se (5N, Alfa Aesar Inc.) powders were mixed in the designed stoichiometric ratio, then heat-treated at 420 °C for 24 hours in evacuated quartz tubes. The as-prepared material was ground and sintered at 450 °C for 48 hours, and this process was repeated more than three times for the final targets. FeSe thin films were prepared by the PLD technique with a KrF laser. The background vacuum of the deposition chamber is better than 10⁻⁷ Torr. The FeSe thin films were grown in vacuum with a target-substrate distance of ∼50 mm, a laser repetition rate of 2 Hz, and a substrate temperature of 350 °C. X-ray diffraction (XRD) measurements of the thin films were performed on a Rigaku SmartLab (9 kW) X-ray diffractometer with a Ge(220) × 2 crystal monochromator. Figures 1(a) and (b) show the XRD data for the FeSe thin films grown on various substrates. In order to avoid possible epitaxial strain from the substrates, the thicknesses of all films are above 160 nm. All XRD patterns show fine (00l)-oriented growth. The corresponding (002) peaks shift slightly leftward as Tc increases, as shown in Fig. 1(c). The c-axis lattice parameters were calculated from the θ−2θ XRD patterns by Bragg's law, and the results will be discussed later. The full width at half maximum (FWHM) of the X-ray rocking curve is 0.515°, showing high crystalline quality. In addition, the fine epitaxy of the films is demonstrated by a clear four-fold symmetry in the φ-scan pattern, as shown in Fig. 1(e).
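The c-axis extraction just mentioned follows directly from Bragg's law, nλ = 2d sinθ: for a (00l) reflection of a tetragonal film, d = c/l, so c = lλ/(2 sinθ). A minimal sketch, assuming Cu Kα radiation (the anode material of the diffractometer is not stated here, so the wavelength, like the example peak position, is an assumption):

```python
import numpy as np

CU_KALPHA = 1.5406  # angstrom; assumes a Cu K-alpha1 source

def c_axis_from_peak(two_theta_deg, l, wavelength=CU_KALPHA):
    """c-axis lattice parameter (angstrom) from a (00l) peak position.

    Bragg's law: wavelength = 2 d sin(theta), with d = c / l for (00l).
    """
    theta = np.radians(two_theta_deg / 2.0)
    return l * wavelength / (2.0 * np.sin(theta))

# Hypothetical (002) peak position for an FeSe film:
print(f"c = {c_axis_from_peak(32.3, l=2):.3f} A")
```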
III. RESULTS AND DISCUSSION

First, all the films on the various substrates were grown with the same thickness for a better comparison. The transport properties of the films were measured in a Physical Property Measurement System (PPMS-9T). Figure 2 shows the R−T curves for these films. Since the fabrication process is identical, the superconductivity appears to depend strongly on the substrates. In most samples a zero-resistance transition can be observed, except for the films on the MAO, LSAO and LSAT substrates. Instead, a low-temperature upturn occurs in the R−T curves of the films on the MAO and LSAO substrates. Meanwhile, the films on CF, LF and STO are worthy of more attention, where high onset Tc values of 15 K, 13 K and 11.5 K, respectively, are reached. These values are higher than those of previous reports on the same substrates. [5, 14-16, 20, 23] Since the thickness is considered a key factor for the superconductivity of thin films, especially for the ultra-thin FeSe system, [9,10,23] we studied the thickness dependence of the FeSe/CaF2 films, which have the highest Tc. In Fig. 3(a), we show the temperature dependence of the resistance for FeSe films of various thicknesses, adjusted only by the number of laser pulses. The superconductivity still remains as the thickness is reduced to 10 nm (about 18 UC), where Tc0 = 2 K. In previous work, strong disorder was usually expected to induce a quantum transition in an ultra-thin system, which could completely suppress the superconductivity. [21,26] The existence of superconductivity in the present 10 nm film demonstrates its high quality and low disorder. With increasing film thickness, the Tc0 also rises gradually, and the films display bulk-like behavior when the thickness exceeds 160 nm. Considering the role of composition off-stoichiometry, namely Fe or Se vacancies, in the superconductivity of FeSe films, [19] we prepared targets with subtle adjustments of the nominal Fe:Se ratio, including 1:1.10, 1:1.05, 1:1.03, 1:1.00, 1.00:0.99, 1.00:0.97, 1:0.95, 1:0.90, and so on. For comparison, the films were grown with the same thickness of ∼160 nm and on the same CaF2 substrate. Figure 3(b) shows the R−T curves for films grown from the different targets. The Tc of the films can be tuned from below 2 K to 15 K with narrow superconducting transitions (ΔT < 2 K). It should be noted that the best sample, deposited from the Fe:Se = 1:0.97 target, shows ΔT = 1.2 K, a residual resistivity ratio RRR ∼ 5 in the R−T curve, Tc = 15 K, and stable superconductivity for more than half a year, which sets a new record for bulk-like FeSe films grown by traditional methods. [21] Given this precise adjustment of target composition, it seems that the ratio of Fe to Se directly determines the superconductivity of the final films. Both energy-dispersive X-ray spectroscopy (EDX) and inductively coupled plasma atomic emission spectroscopy (ICP-AES) were used to check the chemical composition. However, the compositional differences between the superconducting films cannot be clearly resolved by these two methods. Figure 4(a) gives the zero-resistance Tc0 with respect to the corresponding lattice constant c of the FeSe films deposited on various substrates. There is an obvious positive correlation between Tc0 and the lattice constant c of the FeSe films, as guided by the dashed line, but no such correlation between Tc0 and the surface atomic distance (d) of the substrates (see Fig. 4(b)), indicating that the strain from the substrate has been relieved in the present bulk-like films. We emphasize that depositing FeSe on the TiO2(100) substrate, with its rectangular lattice (b = 4.593 Å, c = 2.958 Å), will introduce an anisotropic epitaxial pressure. However, the FeSe/TO film still displays superconductivity comparable to that of films with isotropic strain. Therefore, in comparison with target composition, film thickness and substrate, the c-axis lattice constant is most closely related to the superconductivity of FeSe.
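The positive trend between Tc0 and c in Fig. 4(a) can be quantified with a simple correlation coefficient; a sketch with hypothetical (c, Tc0) pairs standing in for the measured values:

```python
from scipy.stats import pearsonr

# Hypothetical (c-axis in angstrom, Tc0 in K) pairs standing in for Fig. 4(a):
c_axis = [5.48, 5.50, 5.52, 5.53, 5.55]
tc0    = [2.0,  6.0,  9.0, 12.0, 15.0]

r, p = pearsonr(c_axis, tc0)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```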
Comparably, the superconductivity of the multi-layered FeSe-based superconductors, i.e., (Li1−xFex)OHFeSe, shows a similar dependence on the c-axis constant, [27] which reinforces our understanding of the influence of the lattice parameters on superconductivity.

IV. CONCLUSIONS

In conclusion, we have successfully prepared high-quality superconducting FeSe films on CaF2, LiF, SrTiO3, LaAlO3, TiO2(100), MgO, BaF2, MgF2, Nb-doped SrTiO3, La0.3Sr0.7Al0.65Ta0.35O3, (Sr,La)AlO4 and MgAl2O4 substrates. Among them, the films on the CaF2 substrate possess a maximum Tc of 15 K, which is the highest value reported so far. By slightly adjusting the ratio of Fe to Se in the targets, a series of FeSe/CaF2 films with Tc tunable from below 2 K to 15 K was obtained. The superconductivity of the films on the various substrates is found to depend mainly on the c-axis lattice parameter of the FeSe films, in a positive correlation. However, there is no direct correlation between the Tc and the surface atomic distance of the substrates; it is therefore unlikely that the strain effect plays a role in our experiments. The origin of the modification of the c-axis parameter remains to be identified; nevertheless, high-quality FeSe thin films with tunable Tc may pave the way for understanding the nature of FeSe from bulk crystal to ultra-thin film, and shed light on applications in superconducting microelectronic devices, such as hybrid Josephson junctions, superconducting nanowire single-photon detectors, and so on.
High frequency of dyslipidemia in children and adolescents with Down syndrome

aDivisión de Pediatría, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile; bDepartamento de Gastroenterología y Nutrición, División de Pediatría, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile; cDepartamento de Salud Pública, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile; dEscuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile; eCentro UC Síndrome de Down, Santiago, Chile

Introduction

Down syndrome (DS) is the most frequent viable chromosomopathy worldwide, with a reported prevalence between 1 in 691 live births in the United States 1 and 1 in 400 live births in Chile 2. People with DS have an increased risk of chronic diseases, such as overweight, obesity and dyslipidemias, which confer an increased risk of cardiovascular disease (CVD). While some authors have reported a low incidence of atherosclerotic lesions in adults with DS, possibly reducing the risk of coronary events 3, other reports show that these patients have an approximately four times higher risk of death from ischemic heart disease and stroke in adulthood than the general population 4. Due to advances in the prevention, diagnosis and management of chronic diseases, people with DS have a longer life expectancy, with survival increasing from 9 years in the first reports 5 to over 60 years nowadays 6. These advances, and the exposure to new environmental factors, mean that the actual cardiovascular risk of these patients is currently unknown. An assertive diagnosis of dyslipidemias allows early treatment, which mainly consists of lifestyle changes, including modifying dietary and exercise habits. In case of non-response to this first stage, drug therapy should be considered 7,8. Nowadays, there are few studies that describe the lipid profile of the population with DS; most of them are case-control studies with small sample sizes, and many were conducted in adults 9,10. Studies conducted in the pediatric population show higher rates of dyslipidemia in children with DS compared with the general pediatric population 11, suggesting that this condition would be independent of nutritional status 12. Previous reports on the lipid profile of patients with DS show variable results, high levels of triglycerides (TG) and low levels of high-density lipoprotein cholesterol (HDL-C) being the most frequent findings 9-11,13. However, to date, there is no proper description of the lipid profile in children with DS, and whether it contributes to a higher risk of CVD is still controversial. The objective of this study is to determine the frequency of dyslipidemias and to describe the lipid profile in a Chilean population of children and adolescents with DS at risk of dyslipidemia.

Study design

Cross-sectional study including patients with DS between the ages of 2 and 18 years who participated in a special care program for people with DS in the UC CHRISTUS Health Network between 2007 and 2015.

Study subjects

Patients with DS who had a lipid profile (LP) among their routine laboratory tests. The decision to measure the LP was made by their physician, based on the presence of known risk factors for dyslipidemia, such as a family history of dyslipidemia, overweight or hypothyroidism (all tests were routinely performed after fasting).
The worst LP available in the patient's clinical record was selected, considering first the number of plasma lipids out of range and second the magnitude of the deviation from the normal range. Two researchers reviewed the clinical records and collected the relevant information at the moment of LP measurement. The following characteristics were registered: epidemiological characteristics (age, gender), family history of dyslipidemia and early CVD (reported by parents), relevant comorbidities (hypothyroidism, diabetes mellitus, celiac disease), malformations (hemodynamically significant congenital heart defects and gastrointestinal malformations), medications related to the development of dyslipidemia (e.g. risperidone and chemotherapy), nutritional status and pubertal development. Dyslipidemias were classified as: a) Single Dyslipidemia, when only one of the five lipids of the profile was abnormal; b) Combined Dyslipidemia: b1) Atherogenic Dyslipidemia (AD), when high TG and low HDL Chol were found, with TC and LDL Chol in the normal range; b2) Mixed Dyslipidemia, when TC and/or LDL Chol were high, with high TG and normal HDL Chol 15. The nutritional diagnosis criteria are summarized in Table 1 16,17. The diagnosis of short stature was considered for all ages as a Height/Age growth chart value < p3, according to the growth charts for DS 17. Family history of dyslipidemia was considered as any dyslipidemia in parents or siblings, and early CVD was considered as the presence of acute myocardial infarction, treated angina, coronary heart disease interventions, stroke or sudden cardiac death in the father or a brother before 55 years of age, or in the mother or a sister before 65 years of age.

Ethical aspects

This study was approved by the Research Ethics Committee of the School of Medicine, Pontificia Universidad Católica de Chile (registration code #14-064). Due to the retrospective nature of the study, a waiver of consent was approved.

Statistical analysis

Categorical variables were described in terms of number and percentage, and numerical variables in terms of median and range. For AD, the crude association (non-dyslipidemia vs. AD) was analyzed with the following variables: gender (male vs. female), age (median and range), hypothyroidism (yes vs. no), family history of dyslipidemia (yes vs. no) and nutritional diagnosis (normal weight or undernutrition vs. overweight or obesity), using Fisher's exact test. Calculations were performed with SPSS 22.0 software. All p-values < 0.05 were considered statistically significant.
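The classification scheme above can be expressed as a small decision function. The cut-off values below are illustrative placeholders (the NHLBI reference ranges actually used are age-dependent and summarized in Table 1), and the fifth lipid of the profile is omitted for brevity:

```python
def classify_dyslipidemia(tc, ldl, hdl, tg,
                          tc_hi=200, ldl_hi=130, hdl_lo=40, tg_hi=100):
    """Classify one lipid profile (mg/dL) per the scheme in the Methods.

    Cut-offs are illustrative; the actual NHLBI cut-offs vary with age.
    """
    high_tc, high_ldl = tc >= tc_hi, ldl >= ldl_hi
    low_hdl, high_tg = hdl <= hdl_lo, tg >= tg_hi
    n_abnormal = sum([high_tc, high_ldl, low_hdl, high_tg])
    if n_abnormal == 0:
        return "normal"
    if high_tg and low_hdl and not high_tc and not high_ldl:
        return "atherogenic dyslipidemia"
    if (high_tc or high_ldl) and high_tg and not low_hdl:
        return "mixed dyslipidemia"
    if n_abnormal == 1:
        return "single dyslipidemia"
    return "combined dyslipidemia (other)"

print(classify_dyslipidemia(tc=170, ldl=100, hdl=35, tg=160))  # -> atherogenic
```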
Results

Clinical records of 218 children and adolescents with DS were examined. The clinical characteristics of the study group are detailed in Table 2. Fifty-eight percent of patients (N = 127) had at least one plasma lipid out of range. The frequency of each dyslipidemia is described in Figure 1. The most frequently found dyslipidemia was low HDL Chol (15.1%), followed by AD (13.3%). Table 3 shows the values of each plasma lipid (median and range) in the group with dyslipidemia. Among the patients with dyslipidemia, only 49% (N = 62) had one plasma lipid out of range (single dyslipidemia), 26% had two, 13% had three, 9% had four, and 3% of the study population had all five plasma lipids out of range. Within the group of patients with combined dyslipidemia, the most frequent combination was low HDL Chol and high TG. Table 4 details the frequency of single and combined dyslipidemias. The group of AD patients was compared with the group without dyslipidemia, analyzing by age, gender, nutritional status, hypothyroidism, family history of dyslipidemia and early CVD. When assessing by age, patients with AD were younger than healthy patients (3.91 years vs. 5.08 years, p-value = 0.006). No gender differences were observed. In patients with hypothyroidism, there was a trend towards a higher frequency of AD compared to patients without hypothyroidism, although this difference was not statistically significant (28.6% vs. 13.9%, p = 0.105). When analyzing by family history, no significant differences were observed. In relation to nutritional status, the group with overweight and obesity did not show a trend towards a higher frequency of AD.
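The crude associations reported above (e.g., an AD frequency of 28.6% vs. 13.9% by hypothyroidism status, p = 0.105) come from Fisher's exact test applied to 2×2 contingency tables; a minimal sketch with hypothetical counts:

```python
from scipy.stats import fisher_exact

#                      AD   no dyslipidemia
table = [[10, 25],   # hypothyroidism: yes   (hypothetical counts)
         [ 5, 31]]   # hypothyroidism: no

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```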
Discussion

A high frequency of dyslipidemia, close to 60%, was found in our study group. This frequency is much higher than that reported in the general pediatric population, which varies between 15 and 30% 15,18-20. In the analysis of single and combined dyslipidemias, our results showed trends similar to previous reports 9,10,15, in which low HDL Chol, high TG and AD were the most frequent types of dyslipidemia. Regarding the clinical characteristics of our study group, there were higher rates of hypothyroidism (76.1%) than in previous reports, whose rates ranged between 17 and 35% in children with DS 21,22. This finding could be explained by the fact that our study group was selected among patients with risk factors for dyslipidemia. It is remarkable that in our study the presence of each type of dyslipidemia was not related to nutritional status, resembling the results obtained by Adelekan et al., where children with DS had a less favorable LP than their siblings regardless of nutritional status 12. As a matter of fact, in our study the frequency of overweight and obesity is relatively low (21.1%), probably due to strict medical follow-up. On the other hand, it could be proposed that dyslipidemias are related to some genetic factor which influences cholesterol metabolism, rather than to healthy habits such as diet and exercise. We decided to use the single/combined dyslipidemia classification due to its well-described clinical characteristics. Pure hypertriglyceridemia is often related to obesity, high visceral adiposity, insulin resistance and other metabolic complications 8, which are common in people with DS 12. Moreover, low HDL Chol is usually due to central obesity and is related to physical inactivity and diets low in monounsaturated fats. In some cases, although uncommonly, it is related to familial patterns 23-25. It would be interesting to have more information about the dietary habits of our study group, although medical recommendations generally include a healthy diet. The high frequency of hypertriglyceridemia and low HDL Chol is concerning, due to their association with metabolic syndrome 26,27 and CVD 28. In addition, AD in childhood is predictive of accelerated atherosclerosis and early cardiovascular events in adulthood 7. The American Academy of Pediatrics, in agreement with the NHLBI panel of experts (14), recommends universal screening for dyslipidemias, with a first LP performed in the prepubertal stage (9-11 years) and a second LP between 17 and 21 years. However, this approach is still controversial, since multiple entities 8,29-31 recommend carrying out selective screening only in high-risk groups. It is remarkable that the condition of DS has not been considered an independent risk factor for CVD 32. Furthermore, to date, clinical guidelines for health supervision in DS 33,34 do not include screening for dyslipidemias among their recommendations. For this reason, it is important to carry out new studies controlled for comorbidities and risk factors, in order to evaluate the performance of screening for dyslipidemias in this population and to consider possible preventive treatments for CVD. Our results are similar to those previously reported. First, among the strengths of our study, to our knowledge this is the largest cohort (n = 218) that evaluates this issue. Secondly, it reports the lipid profile of an exclusively pediatric population with DS, while previous studies focused mainly on adults. Thirdly, the use of the NHLBI dyslipidemia definitions allows a proper comparison of our results with previous publications. Additionally, the classification of dyslipidemia as single or combined allows the direct clinical implications of each to be analyzed. On the other hand, there are some limitations. First, the retrospective nature of the analysis confers a higher risk of selection bias. The fact that our study group was selected from patients with risk factors for dyslipidemia, from a single health center, does not necessarily validate the results for the general population with DS. However, due to the high prevalence of DS in our health center, and to the fact that the UC CHRISTUS Health Network is a reference center in our country, we consider our sample acceptable. Nevertheless, new prospective studies are needed to confirm these results. Our findings suggest that DS itself confers a higher risk of dyslipidemias, independently of the presence of comorbidities typically related to an abnormal LP, such as overweight, obesity and hypothyroidism. It could be expected that this is related to a genetic cause, a special form of lipid metabolism 11,35 or other DS-related conditions which have not been studied so far. Although the studied population corresponds to a biased sample, the high frequency of dyslipidemia compared to the general pediatric population, and the development of dyslipidemia at early ages, suggest the need for early clinical awareness in this group. This study provides insights for new research lines in this population, such as the exploration of dyslipidemias in all patients with this genetic condition. In addition, long-term surveillance of these patients would be essential to assess the frequency of dyslipidemia independent of the presence of associated risk factors, and to correlate dyslipidemia and CVD, in order to clarify the risk of coronary heart disease in adults with DS. On the other hand, prospective studies would allow the development of general screening recommendations and interventions for this group.
In conclusion, our study suggests that dyslipidemia screening should be performed early in all patients with DS, and that the condition of DS should be considered an independent risk factor for developing dyslipidemia.

Ethical Responsibilities

Human and animal protection: The authors state that the procedures followed conformed to the Declaration of Helsinki and the World Medical Association regulations regarding human experimentation developed for the medical community.

Data confidentiality: The authors state that they have followed the protocols of their center and local regulations on the publication of patient data.

Rights to privacy and informed consent: The authors have obtained the informed consent of the patients and/or subjects referred to in the article. This document is in the possession of the corresponding author.

Financial Disclosure

The authors state that no economic support has been associated with the present study.

Conflicts of Interest

The authors declare no conflict of interest regarding the present study.

Figure 1. Frequency of each type of dyslipidemia. The most frequent type of dyslipidemia was low HDL Cholesterol, followed by hypertriglyceridemia. TC: Total Cholesterol; LDL Chol: Low-Density Lipoprotein Cholesterol; HDL Chol: High-Density Lipoprotein Cholesterol; TG: Triglycerides.

Table 1. Nutritional diagnosis criteria. WHI: Weight for Height Index; DS: Down Syndrome; BMI: Body Mass Index; WHO: World Health Organization; SD: Standard Deviation. *WHI = Weight for Height Index (%), calculated as (real weight × 100)/expected weight; expected weight considered as p50 for height. †BMI = Body Mass Index.

Table 2. Descriptive characteristics of the studied population. *Pubertal development: Tanner ≥ 2. †Family history of dyslipidemia or premature CVD (i.e., heart attack, treated angina, interventions for coronary artery disease, stroke, or sudden cardiac disease in a male parent or sibling before 55 years of age, or a female parent or sibling before 65 years of age). ‡Congenital heart diseases that required surgery. §Relevant medications: chemotherapy and risperidone.

Table 4. Frequency of single and combined dyslipidemias. TC: Total Cholesterol; LDL Chol: Low-Density Lipoprotein Cholesterol; HDL Chol: High-Density Lipoprotein Cholesterol; TG: Triglycerides.
Algorithm to compute the electric field gradient tensor in ionic crystals

A simple algorithm and a computational program to numerically compute the electric field gradient and the concomitant quadrupolar nuclear splitting are developed for an arbitrary ionic crystal. The calculations are performed using a point charge model. The program provides three different modes of data input: by Bravais lattice, by lattice parameters, or by introducing an arbitrary spatial structure. The program calculates the components of the electric field gradient, the asymmetry parameter and the quadrupolar splitting for a given number of nearest neighbors with respect to the nuclear charge taken as origin. In addition, the program allows the use of different Sternheimer antishielding factors.

Introduction

The electrostatic energy W due to the interaction of a nuclear charge distribution ρ(r) and the electrostatic potential V(r) generated by its electric environment is given by

W = ∫ ρ(r) V(r) d³τ,  (1)

where d³τ is the volume element and r = (x₁, x₂, x₃) are spatial coordinates. The integral is calculated over the nuclear volume. A suitable way to evaluate it is to make a multipole expansion of the electrostatic potential V(r) around the center of charge of the nucleus as origin, assuming that V(r) is a slowly varying function over the nuclear region. Expanding in a Taylor series around the nuclear center of charge, one obtains [1,2]

V(r) = V(0) + r·(∇V)|_{r=0} + (1/2) Σ_{ij} x_i x_j (∂²V/∂x_i ∂x_j)|_{r=0} + ···  (2)

The relevant terms in this expansion are the first and the third, because the second one vanishes: when multiplied by the nuclear charge, it represents the interaction of the nuclear dipole moment (which is zero) with the external electric field E. The next non-zero terms are several orders of magnitude smaller than the third one [3], so, to a good approximation, the interaction energy can be expressed as

W ≈ W₀ + (1/6) Σ_{jk} V_{jk} Q_{jk},  (3)

where V_{jk} are the electric field gradient (EFG) tensor components and Q_{jk} are the quadrupolar nuclear moment components; both are second-rank tensors. Choosing a principal-axis system for the EFG tensor, the interaction energy can be expressed as the sum of two terms,

W = W_IS + Δ_Q.  (4)

The first term, called the isomer shift, represents the effect of the finite nuclear size. The second one corresponds to the so-called quadrupolar nuclear splitting Δ_Q, so the interaction Hamiltonian between the nuclear quadrupolar moment Q̂ and the EFG tensor ∇E, with respect to an arbitrary axis system with origin at the nuclear charge centroid, is given by

H_Q = (1/6) Q̂ ⊗ ∇E,  (5)

where ⊗ denotes the tensorial product. Taking the z axis along the largest component of the EFG (V_zz = eq) and using the Laplace equation, the Hamiltonian (5) is transformed through the Wigner-Eckart theorem [4] into

H_Q = [e²qQ / (4I(2I−1))] [3I_z² − I² + η(I_x² − I_y²)],  (6)

where I², I_x, I_y and I_z are the nuclear spin operators and

η = (V_xx − V_yy) / V_zz  (7)

is the so-called asymmetry parameter, which indicates how much the electric potential departs from spherical symmetry. An analytical solution of (6) can only be obtained for the I = 3/2 case [6]. By far the most used isotope in Mössbauer spectroscopy is ⁵⁷Fe, for which the useful transition is I = 3/2 → I = 1/2, and in what follows we restrict ourselves to this case. The analytical solution is

E(±3/2, ±1/2) = ± (e²qQ/4) √(1 + η²/3).  (8)

That is, the nuclear I = 3/2 energy level is split into two levels (±3/2 and ±1/2), and the ground level I = 1/2 stays degenerate. This gives rise to two absorption lines in the Mössbauer spectrum separated by the energy

Δ_Q = (e²qQ/2) √(1 + η²/3),  (9)

which is called the quadrupolar nuclear splitting.
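Equation (9) is a one-line computation once V_zz and η are known; a minimal sketch in SI units (the input EFG values in the example are hypothetical):

```python
import numpy as np

def quadrupole_splitting(Vzz, eta, Q):
    """Quadrupolar splitting for an I = 3/2 -> 1/2 transition, eq. (9).

    Vzz : largest EFG principal component (V/m^2)
    eta : asymmetry parameter (dimensionless)
    Q   : nuclear quadrupole moment (m^2)
    Returns the splitting energy in joules.
    """
    e = 1.602176634e-19  # elementary charge (C)
    return 0.5 * e * Q * Vzz * np.sqrt(1.0 + eta**2 / 3.0)

# Hypothetical EFG values; Q = 0.16 b = 0.16e-28 m^2 for 57Fe:
dq = quadrupole_splitting(Vzz=1.0e21, eta=0.3, Q=0.16e-28)
print(f"Delta_Q = {dq:.3e} J ({dq / 1.602176634e-19 * 1e9:.2f} neV)")
```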
Electric Field Gradient Tensor

In rectangular coordinates, the EFG of a set of n point charges is

V_{ij} = Σ_{k=1}^{n} q_k (3 x_i^k x_j^k − r_k² δ_{ij}) / r_k⁵,  (10)

where q_k and r_k = (x₁^k, x₂^k, x₃^k) are the charge and position of the k-th ion, respectively. The electric interaction of the nucleus with its surroundings has two different origins: the charge density of the electrons of the atom under study, and the ligands of the crystal lattice [5,6]. For ionic crystals, the main contribution to the EFG comes from those ions directly coordinated to the nucleus. Considering that the interatomic distances in a crystal are much larger than the displacements due to ion vibrations, a good approximation to the EFG can be obtained from equation (10) with a point charge model. The contribution to the EFG due to the electronic distribution requires a complex calculation that involves not only knowledge of the electronic wave functions of the atom, but also shielding and antishielding effects and polarization of the electronic distribution by other nearby charges. This type of calculation is beyond the scope of this work, so its effect is taken into account through two parameters, R and γ∞, known as the Sternheimer shielding and antishielding factors [7-11], to obtain

V_zz = (1 − γ∞) V_zz^{lig} + (1 − R) V_zz^{val}.  (11)

The first term in equation (11) corresponds to the ligand contribution, while the second one constitutes the valence contribution. However, in ionic crystals the latter can be neglected, and the following program is developed for this case.

Structure of the computational program

The program is intended as a useful tool in a Mössbauer spectroscopy laboratory, so it computes the components of the EFG tensor and the quadrupolar splitting for a ⁵⁷Fe nucleus by default. However, it is able to work with any nucleus, simply by introducing the respective quadrupolar moment value. In order to compute the EFG for a great number of crystalline lattices, three different input data modes were developed, allowing a wide range of applications. These modes are briefly described here:

• N arbitrary ions: In this section of the program, the coordinates and valences of each ligand constituting the crystalline array are input manually. The spatial distribution of the ions can be totally arbitrary. The algorithm can handle a number of ions as large as necessary, this number being, of course, finite.

• Bravais lattices: Here, one of the fourteen possible Bravais lattices in three dimensions is chosen, by introducing the parameter(s) that define such a lattice. The program allows the user to select the place in the lattice at which the EFG will be computed.

• Lattice parameters: When available, one can input the values of the lattice parameters, so that the program identifies the respective lattice and reconstructs it in order to carry out the computations.

The program was developed in a structured computational language, so there is a main module calling different functions and subroutines.
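Before detailing the program's functions, here is a minimal sketch of the core computation: the ligand sum of equation (10), followed by diagonalization into the principal-axis convention |V_zz| ≥ |V_yy| ≥ |V_xx| and evaluation of η via equation (7). SI units are used, and the Sternheimer factors of equation (11) are omitted, as for the ligand-only case treated here; the example geometry is hypothetical:

```python
import numpy as np

K = 8.9875517873681764e9  # Coulomb constant 1/(4*pi*eps0), SI units

def efg_point_charges(charges, positions):
    """Point-charge EFG tensor at the origin, eq. (10), plus eta.

    charges   : iterable of ion charges (C)
    positions : (n, 3) array of ion coordinates (m)
    Returns (Vzz, eta) with |Vzz| >= |Vyy| >= |Vxx|.
    """
    V = np.zeros((3, 3))
    for q, r in zip(charges, np.asarray(positions, float)):
        d = np.linalg.norm(r)
        V += K * q * (3.0 * np.outer(r, r) - d**2 * np.eye(3)) / d**5
    vxx, vyy, vzz = sorted(np.linalg.eigvalsh(V), key=abs)
    eta = abs((vxx - vyy) / vzz)  # eq. (7)
    return vzz, eta

# Two equal charges on the z axis (axially symmetric, so eta = 0):
e = 1.602176634e-19
vzz, eta = efg_point_charges([e, e], [[0, 0, 2e-10], [0, 0, -2e-10]])
print(f"Vzz = {vzz:.3e} V/m^2, eta = {eta:.2f}")
```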
Functions

There are three functions defined in the program.

R: This function calculates the Euclidean rectangular distance between the i-th ligand coordinates and the nucleus under study, taken as the origin.

V: This function calculates the V_{x_i x_i} component of the EFG tensor in principal axes for the i-th ion with equation (10), for each value of k, where q_k is the valence charge of the ligand, q_lig.

DQ: This function computes the value of the quadrupolar splitting as a function of V_zz and the asymmetry parameter, through equations (9) and (11), leaving the result in terms of the (1 − γ∞) factor, without considering the valence contribution to the total charge.

Main module

Here the physical constants to be used in the program are defined. It establishes the value of the quadrupolar moment Q to be used² and the ionic configuration to be worked out, as follows:

N arbitrary ions

• Introduce the number N of ligands to be considered in the computation.
• Introduce the three coordinates (in angstroms) and the valence of each ion.
• The distance to the origin is computed for each ion, via the R function.
• The components of the EFG are calculated, adding³ to each one the contribution of each ion through the function V.
• The largest component of the EFG is assigned to |V_zz|, with |V_yy| ≥ |V_xx|.
• The asymmetry parameter is computed with equation (7).
• The value of the quadrupolar splitting is computed with the function DQ.
• The results for the EFG components, the asymmetry parameter and the quadrupolar splitting are shown on screen and saved to a file.

Bravais lattices

• Choose one of the seven possible groups in three dimensions.
• Select the lattice to be taken into account⁴ and introduce the parameter(s) that define it. Then choose the number of nearest neighbors to be considered, the valence of each layer of neighbors, and the position in the structure at which the EFG is to be computed (the center or a vertex of the structure).
• With this information, the program computes the coordinates of the ligands in the lattice, through the algorithm presented in the next section. Once the coordinates are determined, the distance of each ligand to the origin is computed. In order to identify and count the layers of nearest neighbors, the information on all the generated neighbors is ordered and displayed by growing distance to the origin⁵. The components of the EFG are calculated for the chosen layers of neighbors.
• The largest component of the EFG is assigned to |V_zz|, with |V_yy| ≥ |V_xx|.
• The asymmetry parameter is computed with equation (7).
• The value of the quadrupolar splitting is computed with the function DQ.
• The results for the EFG components, the asymmetry parameter and the quadrupolar splitting are shown on screen and saved to a file.

Lattice parameters

• The program identifies the lattice that corresponds to the lattice parameters introduced and, with the information on the lattice, proceeds in the same way as in the previous section, Bravais lattices.

² The program uses the value Q = 0.16 b of the quadrupolar nuclear moment for ⁵⁷Fe, recently reported by Martínez-Pinedo et al. [12].
³ Due to the superposition principle of the electric potential.
⁴ If possible, choose between the simple, body-centered, two-face-centered or face-centered corresponding structure.
⁵ The ordering process is carried out with a bubble-sort algorithm, which does not compromise the efficiency of the program because the lists of neighbors to be ordered are generally not too large in standard solid-state calculations.

Algorithm

In what follows, the main algorithm used by the Bravais lattices and lattice parameters modes to find the points in the lattice where the ligands are to be considered for the computations is described:

a) Select the number of nearest neighbors to be considered.

b) If all the neighbors have the same valence, introduce it;
if not, introduce the valence of each layer of neighbors.

c) Choose whether to compute the EFG at the center or at a vertex of the structure.

d) With the six lattice parameters (a, b, c) and (α, β, γ), the rectangular components of the crystallographic axes are calculated through the transformation equations obtained in Appendix A.

e) If the studied nucleus is centered in the body, the coordinates of the ligands are calculated as follows:

• If the structure is simple (SC, ST, SO, SM, triclinic, trigonal or hexagonal)⁶, the coordinates of the i-th ion are computed through equations (13), where the numbers n₁, n₂ and n₃ are whole numbers in the interval [−m, m]⁷.
• If the structure is body-centered (BCC, BCT or BCO)⁶, the coordinates of the i-th ion are computed through equations (14), in the same way as for simple structures but excluding the point (0, 0, 0).
• If the structure is face-centered (FCC or FCO)⁶, the coordinates of the i-th ion are computed through equations (15), (16) and (17) for the ligands in the faces parallel to the crystallographic planes, where the numbers n₁, n₂ and n₃ are whole numbers in the interval [−m, m]⁷ such that n₁ and n₂ cannot be zero.
• If the structure is two-face-centered (2FCO or 2FCM)⁶, the coordinates of the i-th ion are computed through equations (18), where the numbers n₁, n₂ and n₃ are whole numbers in the interval [−m, m]⁷ such that n₃ cannot be zero.

f) If the studied nucleus is centered at a vertex of the structure, the coordinates of the ligands are computed as follows:

• If the structure is simple (SC, ST, SO, SM, triclinic, trigonal or hexagonal)⁶, the coordinates of the i-th ion are computed through equations (14), excluding the point (0, 0, 0).
• If the structure is body-centered (BCC, BCT or BCO)⁶, the coordinates of the i-th ion are computed through equations (19), in the same way as for simple structures, excluding the point (0, 0, 0).
• If the structure is face-centered (FCC or FCO)⁶, the coordinates of the i-th ion are computed through equations (15), (16) and (17) for the ligands in the faces parallel to the crystallographic planes, where the numbers n₁, n₂ and n₃ are whole numbers in the interval [−m, m]⁷ such that n₁ and n₂ cannot be zero.
• If the structure is two-face-centered (2FCO or 2FCM)⁶, the coordinates of the i-th ion are computed through equations (20), where the numbers n₁, n₂ and n₃ are whole numbers in the interval [−m, m]⁷ such that n₁ and n₂ cannot be zero.

g) The distance to the origin is computed for each ion, via the R function.

h) The ligands are ordered by growing distance to the origin⁵.

i) The number of ions in each layer of nearest neighbors is counted⁸.

j) The valences introduced in step b) are assigned to each layer of nearest neighbors computed previously.

k) The components of the EFG are computed, adding³ to each one the contribution of each ion through the function V.

⁷ The number m (chosen in step a)) depends on the number of nearest neighbors to be considered in the computation. For example, m = 2 is enough to find the third-nearest neighbors.
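A minimal sketch of steps d)-i) for the simplest case, a simple cubic lattice with the nucleus at a vertex; the program additionally handles centered lattices and oblique crystallographic axes via equations (13)-(20):

```python
import numpy as np
from itertools import product

def simple_cubic_neighbors(a, m):
    """Ligand coordinates for a simple cubic lattice, nucleus at a vertex.

    Generates all points (n1*a, n2*a, n3*a) with n_i in [-m, m], excluding
    the origin, sorted by distance so that neighbor shells can be counted.
    """
    pts = [np.array([n1, n2, n3], float) * a
           for n1, n2, n3 in product(range(-m, m + 1), repeat=3)
           if (n1, n2, n3) != (0, 0, 0)]
    return sorted(pts, key=np.linalg.norm)

shells = {}
for p in simple_cubic_neighbors(a=1.0, m=2):
    d = round(np.linalg.norm(p), 6)
    shells[d] = shells.get(d, 0) + 1
# First shells of a simple cubic lattice: 6 at d=a, 12 at a*sqrt(2), 8 at a*sqrt(3)
print(list(shells.items())[:3])
```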
It is important to point out that in our calculations the contribution of the Sternheimer factors to the EFG has not been taken into account, so the calculated values will normally be smaller than the experimental values. However, the purpose of these calculations is to provide a guide for discriminating between different site environments of the iron nucleus through the relative magnitudes of the calculated EFG [13].

Conclusions

The algorithm and program developed here constitute a highly applicable tool, both for experimental spectroscopy and for any theoretical research in solid state physics and crystallography requiring this kind of computation. The program presented in this work is extremely versatile and friendly to the final user, and only requires the structural information of the system under study. In spite of the fact that the computations of the EFG tensor and its quadrupolar splitting are based on a simple point charge model, disregarding the valence contribution, it is a useful tool to discriminate between the different structures present in complex ionic systems. The program was written in Fortran 77, to assure high compatibility across different platforms, but the structure of the program and the algorithm are equally useful if the program is written in any other structured computational language, like C, Python or Pascal, without compromising its accuracy and stability. Of course, the program could be improved by including the effect of the electronic density in the EFG tensor, but that matter may be explored in future work. However, the computation of the shielding and antishielding factors is complicated, so their effect has to be taken into account with empirical adjustments, which do not significantly change the relative magnitudes of the quadrupolar splittings.

Appendix A. Transformation from crystallographic to rectangular coordinates

Consider the crystallographic axes ā, b̄ and c̄, and the rectangular ones x̄, ȳ and z̄, as shown in Figure A.1. The crystallographic axes can be expressed in the Cartesian basis as:
Design of Automatic Lung Nodule Detection System Based on Multi-Scene Deep Learning Framework

I. INTRODUCTION

Lung cancer is one of the world's most dangerous cancers; however, early detection significantly increases the survival rate of lung cancer patients [1]. The cells in the lung can form small cancerous (malignant) and non-cancerous (benign) lung nodules [2], [3]. For an essential prognosis, the early diagnosis of malignant lung nodules is important. Cancerous lung nodules in the early stage are very similar to non-cancerous nodules and need to be differentially identified based on small morphological variations, positions and medical biomarkers [4]. For early lung nodules, the difficult task is to calculate the risk of malignancy. A computed tomography (CT) scan (morphological evaluation), positron emission tomography (PET) and a needle-prick biopsy examination are used in many diagnostic procedures [5] in connection with the early diagnosis of malignant lung nodules in clinical settings [6], [7]. Moreover, medical practitioners primarily use invasive approaches such as biopsies or operations to differentiate benign from malignant lung nodules [8]. Invasive methods involve many risks and raise the anxiety of patients about such a delicate and sensitive organ [9]. CT imaging is the best approach for analyzing lung diseases [10]. In addition, the radiation dose of low-dose CT is substantially less than that of a normal-dose CT scan [11], and results show no significant differences in detection sensitivity between low-dose and normal-dose CT images. In the National Lung Screening Trial (NLST), deaths from lung cancer were significantly reduced in the majority of the population screened with low-dose CT scans relative to chest X-rays [12]. Advanced clinical information (thinner slices) and advanced imaging techniques improve the identification sensitivity for lung nodules [13], [14]; however, this significantly increases the size of the datasets. In one scan, up to 500 sections/slices are generated, depending on the thickness of the slice. It takes 2-3.5 minutes for a trained radiologist to examine a single slice [15]. A radiologist's workload therefore increases significantly when detecting the potential presence of a nodule [16], [17]. Results show only a 68% accurate diagnosis of lung nodules when a single X-ray specialist conducts the analysis, and up to 82% when two radiologists do. Early detection of cancerous lung nodules is a very challenging, time-consuming and repetitive task for radiologists. The radiologist requires a great deal of time to view a number of scans with precision, while the identification of small nodules is very error-prone. Benign and malignant nodules have a significant overlap in their characteristics [18] and must be identified at an early morphological stage (Figure 1). Normally, benign nodules are located at the periphery, with smooth surfaces and triangular forms [19], whereas malignant nodules may show spiculated, lobulated, vascularized, cystic, pleural-indented or sub-solid morphological boundaries [20], [21]. In addition, this study aimed to use 2D multi-scene CT images for the identification of nodules. Initially, a patch analysis approach developed for this study resolved the distortions in lung contours induced by juxta-pleural nodules.
Secondly, the vessel structures in lung CT images were removed with adjustable parameters. Finally, a new deep learning framework, a CNN, was developed to learn the two image scenes. Scene 1 consists of the original CT images, and scene 2 is made of binary images produced by processing scene 1. In this way, several pairs of sub-images from an original image (scene 1) can be compared with the binary image (scene 2); the sample image was taken from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3041807/. The sub-image pairs were the only images presented to the CNN model. This CAD system for lung nodule identification performs outstandingly and saves time and storage space.

II. SURVEY ABOUT PREVIOUS WORKS

A label in a lung CT image relates to a radiological finding that suggests an abnormal disease. Analysis of CT signs helps in understanding the clinical source of a lesion. A detailed study of the classification of lung nodules with various CT signs helps to make the distinction between benign and malignant nodules simpler and more accurate. Towards this purpose, Zheng et al. [22] proposed a method for classifying pulmonary nodule diagnoses based on fusing various nodular signs. Initially, they created a CNN classifier that adopts and retrains Inception modules pre-trained on ImageNet. The pre-trained classification system was fine-tuned with 10 different pulmonary sign samplings, and the resulting 10 classifiers were combined with an artificial immune system algorithm. The overall sensitivity, precision and accuracy of the proposed Inception Network Fusion (INF) algorithm were significantly higher than those of the alternative bagging and boosting approaches. Rodrigues et al. [23] suggested the structural co-occurrence matrix (SCM) approach for classifying nodules as malignant or benign. The SCM technique was applied to extract nodule image characteristics and to classify the nodules as malignant or benign and by their malignancy level. Data on nodule locations and malignancy rates are given in the computed tomography analyses of the Lung Image Database Consortium and Image Database Resource Initiative. The SCM was implemented with four filters, namely mean, Laplace, Gaussian and Sobel, on both grayscale and Hounsfield-unit images. Xie et al. [24] proposed a multi-view knowledge-based collaborative (MV-KBC) model to differentiate malignant from benign nodules using limited chest CT data. This design characterizes the 3-D pulmonary nodule by decomposing it into nine fixed views. For each view, they established a knowledge-based collaborative (KBC) sub-model, in which three types of image patches are built for fine-tuning three pre-trained ResNet-50 networks that characterize the nodule's overall appearance, voxel heterogeneity and shape heterogeneity. A penalty loss function is used to better reduce the false-negative rate, which improves the overall performance of the MV-KBC model. They evaluated their system on the LIDC-IDRI benchmark and compared it with the five most advanced classification approaches. Chung et al. [25] proposed a lung segmentation method to reduce the problem of the juxta-pleural nodule, a common challenge in such applications. Initially, they used the Chan-Vese (CV) model for active contours, followed by a Bayesian approach that, based on the results of the CV model, predicts the lung image from an earlier frame or the neighboring image on the basis of the segmented lung contour.
False positives were removed by concave-point detection, and Hough circle/ellipse detection produced the resulting candidates. Eventually, the lung contour was corrected by applying the final nodule candidates to the results of the CV model. High precision in the detection of juxta-pleural pulmonary nodules would support any computer-aided diagnosis system using lung segmentation as an essential step [26]-[28]. To overcome the limitations of the surveyed works, this paper proposes an efficient identification system for lung nodules based on a Multi-Scene Deep Learning Framework (MSDLF) with a vesselness filter.

III. METHODOLOGY

A. PREPARATION OF DATA SET

LIDC/IDRI is a public data set. A series of 1018 CT scans from 1010 patients is gathered in this database. Radiologists were actively involved in the nodule classification of the LIDC/IDRI database: the findings of the first-phase assessments were collectively reviewed in the second phase, in which each radiologist independently re-examined the specimens and recorded their observations again. Therefore, for every case in the LIDC/IDRI database, the annotations of these radiologists are available in an XML file. The CT images are stored in the DICOM format, one of the common medical standards, and for each lung CT image several image parameters, such as slice and pixel spacing, can be checked. Since the lung CT images in the LIDC/IDRI database were obtained by various imaging devices, we normalized every pixel distance to identify the actual dimensions of 2D nodules. The uniform pixel length is set at 0.699 mm, which reflects the average pixel length. Two kinds of nodules are classified in the radiological records: small nodules of less than 3 mm in diameter and large nodules of more than 3 mm in diameter. Since small nodules have only a central point label and are easily eroded by image filters, this conceptual model is not suitable for identifying small nodules. Consequently, the principal purpose of this analysis is to identify large nodules (> 3 mm), as demonstrated in the subsequent sections. This research used 1006 scans with XML annotation files. Since the data in these XML files are not easily accessible, a data storage structure for the radiologists' annotations was created. Each radiologist (identified by ID) contributes a set of detected nodules and their pathological features; the details of this structure are given in Figure 2 (the patient data storage system). The XML files were therefore converted into patient data structures that can be quickly extracted and coded for experiments. Furthermore, for the validity of the system, the study took level 1 — a nodule noted by at least one radiologist — as a true nodule, so that the true nodules serve as the gold standard. Figure 3(a) displays a circular region containing the scanner couch, arms, spine, pulmonary lobes, tissue, etc., while a non-imaging region is located outside the circle. An automated segmentation scheme suggested by this study is implemented in four steps in order to extract the lung parenchyma precisely. The lung parenchyma segmentation process is illustrated in Figure 4.

B. MENDING OF LUNG CONTOUR AND SEGMENTATION OF PARENCHYMA

A histogram is used in threshold segmentation to obtain the probability distribution of the different gray levels. This probability distribution is modeled as the mixture

q(y) = Σ_{j=1}^{l} Q_j q_j(y),  q_j(y) = (1 / (√(2π) ρ_j)) exp(−(y − N_j)² / (2ρ_j²)),  (1)

where l represents the total number of scan image categories, Q_j and q_j(y) are the prior probability and the probability distribution function of category j, N_j is the mean and ρ_j is the standard deviation. For two categories, the total probability error,

E(W) = Q₁ ∫_W^∞ q₁(y) dy + Q₂ ∫_{−∞}^W q₂(y) dy,  (2)

is minimized; it is used for the optimal threshold calculation. This error is minimal at the threshold W_j determined by

Q₁ q₁(W_j) = Q₂ q₂(W_j).  (3)

A thresholded image is now generated, giving the lung mask.
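A numerical sketch of equations (1)-(3): for two classes, the optimal threshold is the gray level at which the two weighted class densities cross. The class statistics below are hypothetical stand-ins for values estimated from a real CT histogram:

```python
import numpy as np

def optimal_threshold(Q1, N1, rho1, Q2, N2, rho2):
    """Numerically find the threshold W minimizing the total error, eq. (2).

    Q_j, N_j, rho_j: prior, mean and standard deviation of class j
    (estimated from the gray-level histogram). Assumes N1 < N2.
    """
    def gauss(y, N, rho):
        return np.exp(-(y - N) ** 2 / (2 * rho ** 2)) / (np.sqrt(2 * np.pi) * rho)

    ys = np.linspace(N1, N2, 10000)
    # The error is minimal where the weighted class densities cross, eq. (3):
    diff = np.abs(Q1 * gauss(ys, N1, rho1) - Q2 * gauss(ys, N2, rho2))
    return ys[np.argmin(diff)]

# Hypothetical lung/background gray-level statistics (HU-like units):
W = optimal_threshold(Q1=0.6, N1=-700, rho1=120, Q2=0.4, N2=-100, rho2=150)
print(f"optimal threshold W = {W:.1f}")
```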
Figure 3(a) displays a circular imaging area containing scanner components, arms, spine, pulmonary lobes, tissue, etc., while the non-imaging region lies outside the circle. In order to extract the lung parenchyma precisely, the automated segmentation scheme suggested by this study is implemented in four steps. The lung parenchyma segmentation process is illustrated in Figure 4. B. MENDING OF THE LUNG CONTOUR AND SEGMENTATION OF THE PARENCHYMA A histogram is used in threshold segmentation to obtain the probability distribution of the different gray levels. This probability distribution is determined by equation (1),

q(y) = Σ_{j=1}^{l} Q_j q_j(y), with q_j(y) = (1/(√(2π) ρ_j)) exp(−(y − N_j)²/(2 ρ_j²)),   (1)

where l represents the total number of scan image categories, Q_j and q_j(y) are the probability and probability distribution function of category j, and N_j and ρ_j are its mean and standard deviation. The total probability error between the two categories is minimized via equation (2), which is used for the optimal threshold calculation. This error relates to the threshold W_j and is determined as per equation (3). A thresholded image is then generated together with the lung mask. In the first step, the background and extra-thoracic regions in the CT images are considerably darker than the chest region, so the locally maximal variation between clusters emphasizes the thoracic shape. The second step is the analysis of the connected components (4- or 8-neighborhood) combined with anatomical knowledge, along with morphological opening and closing, to remove non-lung structures such as the esophagus, using the typical shape and position of the lungs. At the same time, a hole-filling operator is used to fill the bubbles within the lungs and to improve the contours of the region substantially below the lung lobes. The third step is a fast lung-contour mending method for juxta-pleural nodules near the thoracic wall that are lung tumors. Juxta-pleural nodules are easily missed, since, compared with other nodule types, they are likely to be treated as areas outside the lung. To address this issue, two patch sizes, a 32 × 32 pixel block and a 16 × 16 pixel block per row, are used for scanning the lung contour (Figure 3(b-d)). The main focus of this step is to correct the lung outline, which is relatively smooth except where juxta-pleural nodules occur, given the incidence of such cases. The change of the lung contour slope within each patch is measured using the difference method, which determines whether a position has to be mended. Once a juxta-pleural nodule has been found, the boundary can be fixed automatically, depending on its position with respect to the lung. In the last step, the lung parenchyma is segmented using the corrected contour. C. THE REMOVAL OF THE VESSELS A large amount of tissue, nodules, vessels, and other structures is found in the lung parenchyma. Vascular structures visible in the lung can affect lung nodule identification: lung CT images show both vessels and nodules as similarly bright pixels. However, the anatomy and shape of vessels differ distinctly from lung nodules. Vessels generally look like tubes, whereas lung nodules appear as ellipses, small circles, or blob-like structures. This step seeks to remove the vessel systems in the lung in a way that helps to examine the nodule structures. Throughout recent years several sophisticated vesselness algorithms have been proposed, including the Sato filter, Vessel Enhancing Diffusion (VED), and related vesselness filters; the primary use of these filters has been retinal imaging, with decent performance. An image I(x, y) can be expanded in the neighborhood of a given point p(x₀, y₀) as a Taylor series. The second-order term of the Taylor series involves the Hessian matrix of I(x, y), denoted H_{p,σ}; here p(x₀, y₀) is abbreviated as p, and σ is the size of the Gaussian kernel G(x, y). The Hessian is composed of the second-order derivatives of I(x, y) convolved with G(x, y) with respect to x and y,

H_{p,σ} = [ ∂²(I*G)/∂x²  ∂²(I*G)/∂x∂y ; ∂²(I*G)/∂x∂y  ∂²(I*G)/∂y² ](p),   (4)

with the Gaussian kernel

G(x, y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)).   (5)

The eigenvalues of H_{p,σ} and the corresponding eigenvectors are both measured at the scale σ and are referred to as λ_k and μ_k (k = 1, 2). The two eigenvalues λ₁ and λ₂ represent different detection mechanisms for a two-dimensional (2-D) image: μ₁ points along the vessel direction, and μ₂ is orthogonal to μ₁. λ₁ and λ₂ play an important role in discriminating the local vessel orientation. In (6), both eigenvalues λ₁ and λ₂ enter the vesselness measure of the structure,

V(σ) = 0 if λ₂ > 0;  otherwise  V(σ) = exp(−(λ₁/λ₂)²/(2β₁²)) · (1 − exp(−(λ₁² + λ₂²)/(2β₂²))),   (6)

where the parameters β₁ and β₂ are adjustable limits that change the sensitivity of the filter to |λ₁/λ₂| and to the 2-norm of (λ₁, λ₂). This research is aimed at removing the vessel connections rather than at enhancing the vessels in lung CT images. The vessel removal method is therefore developed in two steps. Firstly, the image of the vessel structure is created through the vesselness filter under the optimal scale σ. Secondly, the generated vessel-structure image is subtracted from the original lung image, and the vessels are deleted. The effect of the vessel removal in three cases is shown in Figure 5. After vessel separation, the vessel-removed images are analyzed by the reference threshold system (Figure 6(a)). Note that Figure 6(c) contains fewer suspect nodules than Figure 6(b), so the vessel removal process has the following advantages: 1. It decreases the number of suspicious areas to be checked later, improving the nodule detection rate. 2. It reduces the number of false positives.
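The two-step vessel removal just described can be sketched in Python as follows. This is a minimal 2-D illustration: the Hessian is built from Gaussian derivatives in the spirit of (4)-(5), and a Frangi-type measure in the spirit of (6) is used. The values of sigma, beta1, beta2, and the removal threshold are illustrative placeholders, not the tuned parameters of the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(img, sigma=1.5, beta1=0.5, beta2=15.0):
    # Hessian entries via second-order Gaussian derivatives, cf. Eqs. (4)-(5)
    Hxx = gaussian_filter(img, sigma, order=(0, 2))
    Hyy = gaussian_filter(img, sigma, order=(2, 0))
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    # eigenvalues of the symmetric 2x2 Hessian, ordered so that |l1| <= |l2|
    root = np.sqrt(((Hxx - Hyy) / 2) ** 2 + Hxy ** 2)
    l1 = (Hxx + Hyy) / 2 - root
    l2 = (Hxx + Hyy) / 2 + root
    swap = np.abs(l1) > np.abs(l2)
    l1[swap], l2[swap] = l2[swap], l1[swap]
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)        # tube-vs-blob ratio |l1/l2|
    s2 = l1 ** 2 + l2 ** 2                        # squared 2-norm of (l1, l2)
    v = np.exp(-rb ** 2 / (2 * beta1 ** 2)) * (1 - np.exp(-s2 / (2 * beta2 ** 2)))
    v[l2 > 0] = 0.0                               # keep bright tubular structures only
    return v

def remove_vessels(lung_img, threshold=0.2):
    mask = vesselness_2d(lung_img) > threshold    # binary vessel-structure image
    cleaned = np.where(mask, lung_img.min(), lung_img)  # delete vessel pixels
    return cleaned, mask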
D. DATA SET STANDARDIZATION This paper developed a data normalization system in order to obtain a larger data set, as shown in Figure 7. Certainly, nodules in vessel-removed images can be seen more easily by radiologists. In order to preserve and extract the areas of the presumed nodule lesions, both the original images and the binary vessel images can be analyzed (Figure 7). E. DESIGN OF THE CNN ARCHITECTURE Usually, a CNN architecture comprises convolutional layers, pooling layers, and a fully connected layer. The unique benefit of ReLU is that it reduces the error rate more efficiently than other activation mechanisms. The convolutional layer is described as

y_i = max(0, Σ_j x_j * k_{ji} + b_i),

where x_j and y_i represent the j-th input map and the i-th output map, respectively, and the * symbol denotes the convolution operation. The k_{ji} are the convolution kernels between the j-th input map and the i-th output map, and b_i is the bias of the i-th output map. Both the k_{ji} and the b_i parameters are learned during the training process. The MC pooling can be described as follows: let x be the input map and y_i the output map, with x of size L × L. MP denotes the max-pooling operation, and the size of MP(x) is (L/2) × (L/2). CP denotes the center pooling, meaning that the middle area CP¹(x) is cropped from x, and the size of CP¹(x) is (L/2) × (L/2). Similarly, CPⁱ(x) is the central area of CPⁱ⁻¹(x), and therefore the size of CP²(x) is (L/4) × (L/4). Figure 8 demonstrates the MC pooling procedure. The softmax loss, minimized by stochastic gradient descent with back-propagation, is

loss = −log( e^{x_c} / Σ_{i=1}^{C} e^{x_i} ),

where x_i is the network output for class i and C is the number of classes, set to 2. The four different patch sizes split the nodule measurements into four different dimension levels, which was found by experimentation to improve recognition precision. Every input channel covers the same location as in the original image, taken from a pair of images. Providing such additional detail has been demonstrated to be an effective method for enhancing nodule detection; a sketch of the MC pooling operation is given below.
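The MC pooling described above can be sketched as follows. For illustration we assume PyTorch-style tensors, and we assume that MP(x) and the cropped CP(x) are stacked along the channel dimension; the exact combination used in the paper is shown only in Figure 8, so this is one plausible reading.

import torch
import torch.nn.functional as F

def mc_pool(x):
    # x: feature maps of shape (batch, channels, L, L), with L divisible by 4
    L = x.shape[-1]
    mp = F.max_pool2d(x, kernel_size=2)          # MP(x): (L/2) x (L/2)
    q = L // 4
    cp = x[..., q:q + L // 2, q:q + L // 2]      # CP(x): central (L/2) x (L/2)
    return torch.cat([mp, cp], dim=1)            # stack MP and CP channel-wise

x = torch.randn(2, 8, 32, 32)
print(mc_pool(x).shape)  # torch.Size([2, 16, 16, 16])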
F. SEGMENTATION AND CLASSIFICATION Two classes of image data are identified: Class 1 contains sub-images cut from the original lung images, and Class 2 contains sub-images cut from the vessel-removed lung images, as shown in Figure 9. Size differences can affect detection precision, given that the nodules, as measured by four radiologists, vary from about 3 mm to 30 mm. Every nodule level has a dedicated patch identification window, yielding a sub-image of the original lung image. This allows the related sub-images to be split and arranged into Classes 1 and 2. The scale of each sub-image (actual size and pixel dimension) is recorded. Moreover, the various nodule levels are shown in Figure 10; the panels (a)-(d) correspond to levels 1, 2, 3, and 4, respectively. The first row shows the processed lung images, which include the nodule locations identified by radiologists. The second row shows the threshold segmentation of the first row. Due to the small pixel gap between background and object, the Otsu threshold had little effect on object segmentation, in particular on the nodules. This study therefore searches for a preferred threshold value as described in equation (11),

T = λ · (1/N) Σ_{i=1}^{N} P_i,   (11)

where P_i is the pixel value of the lung parenchyma, N is the cumulative number of lung parenchyma pixels, and λ is an adjustable parameter. When λ was between 1/2 and 2/3, the segmentation performance was beneficial. Noise and small visual objects are then suppressed while the large structures of the nodules are retained, as described in equation (12),

L(p, σ) = (G(σ) * I)(p),   (12)

where L(I, σ) is the result of the convolution of the image I with the Gaussian kernel at scale σ, evaluated at a position p. The second-order behavior of the image I at scale σ is determined at the voxel x as given in equation (13). Since the chest CT scans contain nodules with a large size variation, the response is computed across a number of σ's, each enhancing nodules of a specific size. The final result is determined as the sum of the results from the scales under consideration. This cumulative response for every voxel is estimated as

T(x) = Σ_{j=1}^{n} r(x, σ_j),

where j indexes the σ scales and n is their overall number. The final step is a simple thresholding that picks the voxels with a high response; this yields the enhanced structures in which the nodules are clearly identifiable and thereby establishes the masks containing the final segmented nodules. The thresholds were collected empirically by measurements on the training sets, so as to create a mask covering the area occupied by the nodule as accurately as possible. Using the largest (λ₃) and the smallest (λ₁) eigenvalues of the Hessian, the formula emerges from the principal curvatures at σ = 1. Eventually, the SI and CV thresholds together define the segmentation masks; the final segmentations are the voxels that fulfill all of these criteria. Under-segmentation is often observed because the chosen scale cannot always be adapted to the size of the nodule, especially for large nodules. After segmentation, the characteristics must be identified accurately for the diagnosis of cancer in the lung region. Texture is used for normal and abnormal pattern recognition; texture describes the modulation and fluctuation of an image surface.
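The multi-scale enhancement of this section, i.e. summing a per-scale response over several σ values and thresholding the total, can be sketched as below. The per-scale measure r(x, σ_j) is not specified in full in the text, so the scale-normalised Laplacian-of-Gaussian blob response, the scale list, and the threshold used here are stand-ins chosen for illustration.

import numpy as np
from scipy.ndimage import gaussian_laplace

def total_response(img, sigmas=(1.0, 2.0, 4.0, 8.0)):
    # cumulative response: sum of scale-normalised responses over all sigmas
    resp = np.zeros(img.shape, dtype=np.float64)
    for s in sigmas:
        # the s**2 factor makes blob responses comparable across scales
        resp += -(s ** 2) * gaussian_laplace(img.astype(np.float64), sigma=s)
    return resp

def nodule_candidate_mask(img, threshold):
    # simple final thresholding that picks voxels/pixels with a high response
    return total_response(img) > threshold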
G. NORMALIZED SPHERICAL SAMPLING An important step in our process is sampling. It is well known that nodule shapes are associated not only with the distribution of nodule sizes but also with the anatomical structures surrounding the nodules. The sampling procedure is shown in Algorithm 1. In Algorithm 1, the scheme is defined as normalized spherical sampling. It assesses projection quality as the number of abnormal variations on the projected image, i.e., variation modifications. While all views are weighted by their significance, a further important distinction follows from the characteristics of the imaging system, between common vision and scientific image analysis. While a great deal of effort has been undertaken to detect and classify lung nodules, only a few studies have focused on the detection of nodule types. Nevertheless, comparing methods is a necessary and reasonable step in validating the effectiveness of our method. Based on the above process, the proposed work is further evaluated as follows. IV. NUMERICAL RESULTS AND DISCUSSION Through the combination of Scenes 1 and 2, referred to as Scene 1&2, the detection performance can certainly be improved. This analysis has tested the effectiveness of Scene 1&2 against Scene 1 alone in order to demonstrate the effectiveness of Scene 1&2. Figures 11(a) to (d) show the ROC curves after 10-fold cross-validation, plotting the true-positive rate (sensitivity) against the false-positive rate (1 − specificity). In the nodule detection strategies applied by other investigators, the nodules are not categorized based on their appearance; this analysis has shown that the separate detection of the different nodule sizes (levels 1, 2, 3, 4) can be useful. The study is performed on the whole data set; the number of non-nodule sub-images in Scene 1&2 is approximately 300,000. It can be seen that smaller nodule sizes have more samples than large nodule sizes. Although nodules can be classified according to their position in the lung into four different types, this hardly helps the detection of a nodule, but it can help to classify one. In this experiment, two ways of detecting nodules are tried and their performance is compared. One strategy, as used by other researchers, is to identify nodules without classification (single-level detection). In the other, four separate detections (multilevel detection) are combined under the four different nodule levels provided by this analysis. In order to determine the detection efficiency of the methods, this analysis used the Free-response Receiver Operating Characteristic (FROC). Detected nodules were verified as correct according to the annotations of the radiologists. The detection results of the four levels are combined with 10-fold cross-validation. Figure 13 shows the FROC nodule detection curves under levels 1, 2, 3, and 4. Overall, the results are similar; however, as the level increases, fewer false positives per scan appear, showing the different distributions of false positives across the four levels of the FROC curves. It is therefore essential to separate the four different nodule levels to aid the identification of a nodule. Figure 14 shows the final FROC curves together with the unclassified nodule detection; classified nodule detection thus minimizes the false-positive results over the entire dataset. As shown in Table 1, a further comparative analysis is listed to demonstrate the improved efficiency of this methodology. The whole LIDC/IDRI has been used for this analysis; thus, the lung nodules with less 3-D information were abandoned. In addition, level 3 was agreed upon for the definition of a lung nodule, which means that at least 3 radiologists have to annotate a lung nodule.
In this study, lung nodules can be detected correctly. The proposed Multi-Scene Deep Learning Framework (MSDLF) with the vesselness filter achieves better efficiency when compared with previous methods such as INF, SCM, MV-KBC, and CV. V. CONCLUSION This study has suggested an innovative and efficient automatic pulmonary nodule detection system, which can reduce false positives dramatically. An automated lung-wall mending mechanism is developed in this analysis to avoid missing juxta-pleural nodules. Another significant preprocessing step in this work has been the removal of the vascular structures, which can otherwise be confused with nodules, particularly weak vessels. To detect nodules accurately and quickly, four different CNN structures have been used, based on the four nodule levels. In addition, the input of the CNN model consists of two classes of candidate nodules, containing pairs of images (Scene 1&2). Since the expertise of radiologists is constantly increasing and improving, computer-aided systems must continue to learn from them. The automated nodule detection system of this study is intended to help radiologists greatly improve detection accuracy once the precious contextual information hidden in large data sets is discovered.
2020-05-21T00:06:55.714Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "c8aa2dd4e6ddfb80666080f7e17851c949e1d0bc", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09091120.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "852e9792e255ece47f52f5bd6cedcd5f93a3b70c", "s2fieldsofstudy": [ "Computer Science", "Medicine", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
48361375
pes2o/s2orc
v3-fos-license
Tropical Abstractions of Max-Plus-Linear Systems

This paper describes the development of finite abstractions of Max-Plus-Linear (MPL) systems using tropical operations. The idea of tropical abstraction is inspired by the fact that an MPL system is a discrete-event model updating its state with operations in the tropical algebra. The abstract model is a finite-state transition system: we show that the abstract states can be generated by operations on the tropical algebra, and that the generation of transitions can be established by tropical multiplications of matrices. The complexity of the algorithms based on tropical algebra is discussed and their performance is tested on a numerical benchmark against an existing alternative abstraction approach.

Introduction

Tropical mathematics has been a rapidly growing subject since it was first introduced [1]. It has branches in mathematical fields such as tropical geometry [2] and tropical algebra [1]. The latter denotes an algebraic structure that uses max or min for addition and + for multiplication, respectively; hence, it is well known as max-plus or min-plus algebra. In this paper, we use the former operation to define the tropical algebra. A class of discrete-event systems (DES) based on tropical algebra is the Max-Plus-Linear (MPL) one [3]. Models of MPL systems involve tropical operations, namely max and +. The state space of these models represents the timing of events that are synchronised over the max-plus algebra. This means that the next event will occur right after the last of the previous events has finished. MPL systems find significant application in models where the time variable is essential, such as transportation networks [4], scheduling [5], and manufacturing [6]. Another MPL application deals with biological systems [7]. Formal abstractions denote a set of techniques to generate abstract versions of large or even infinite models [8]. This results in less complex abstract models, which allow one to replace the analysis of the original (concrete) models with automated and scalable techniques. The abstract states and abstract transitions are generated based on a so-called abstraction function. Often the relation between concrete and abstract model can be formalised by the notion of simulation [8]. Finite abstractions of MPL systems were first introduced in [9]. These abstraction procedures start by transforming a given MPL system into a Piece-Wise Affine (PWA) model [10]. The PWA model is characterised by several domains (partitions, or PWA regions) and corresponding affine dynamics. The resulting abstract states are the partitions corresponding to the PWA regions. Finally, the transition relation between pairs of abstract states depends on the trajectory of the original MPL system. This abstraction technique enables one to perform model checking over an MPL system; one of the applications is safety analysis [9]. Interested readers are referred to [9,11,12] and the VeriSiMPL toolbox [13]. This paper introduces the idea of Tropical Abstractions of MPL systems. The approach is inspired by the fact that an MPL system is a DES that is natively updated via tropical operations. We will show that the abstraction of MPL systems can be established by tropical operations and with algorithms exclusively based on tropical algebra. We argue by experiments that this yields clear computational benefits over existing abstraction techniques. The paper is outlined as follows. Section 2 is divided into three parts.
The first part explains the basics of MPL systems, including the properties of the state matrix. We introduce the notion of region matrix and of its conjugate, which play a significant role in the abstraction procedures. The notion of definite form and its generalisation are explained in the second part. Finally, we introduce a new definition of Difference Bound Matrices (DBM) [14]. Equipped with these notions, all algorithms of the tropical abstraction procedure are explained in Section 3. In particular, we prove that the resulting PWA regions characterised by the MPL system are equivalent to the definite form of the state matrix. We also show that both the computation of the image and of the inverse image can be established with tropical matrix multiplications w.r.t. the region matrix and its conjugate; this is later used for reachability analysis (forward and backward). The comparison of the algorithms' performance against the state of the art is presented in Section 4. The paper is concluded with Section 5. The proofs of the results are in the Appendix.

Models and Preliminaries

This section discusses the notion of Max-Plus-Linear systems [3] and the definite form of tropical matrices [15]; it then introduces the concept of Difference-Bound Matrices (DBM) as tropical matrices.

Max-Plus-Linear Systems

In tropical algebra, R_max is defined as R ∪ {−∞}. This set is equipped with two binary operations, ⊕ and ⊗, where a ⊕ b := max{a, b} and a ⊗ b := a + b, for all a, b ∈ R_max. The algebraic structure (R_max, ⊕, ⊗) is a semiring with ε := −∞ and e := 0 as the null and unit element, respectively [3]. The notation R^{m×n}_max represents the set of m × n tropical matrices whose elements are in R_max. Tropical operations can be extended to matrices as follows:

[A ⊕ B](i, j) = A(i, j) ⊕ B(i, j),   [A ⊗ C](i, j) = ⊕_k (A(i, k) ⊗ C(k, j)),

for all i, j (and k) in the corresponding dimensions. Given a natural number m, the tropical power of A ∈ R^{n×n}_max is denoted by A^{⊗m} and corresponds to A ⊗ … ⊗ A (m times). As in standard algebra, the zero power A^{⊗0} is the n × n tropical identity matrix I_n, whose diagonal and non-diagonal elements are e and ε, respectively. An (autonomous) MPL system is defined as

x(k + 1) = A ⊗ x(k),

where A ∈ R^{n×n}_max is the system matrix and x(k) = [x_1(k) … x_n(k)]^⊤ is the state variable [3]. Traditionally, x represents the time stamps of the discrete events, while k corresponds to an event counter. Definition 1 (Precedence Graph [3]). The precedence graph of A, denoted by G(A), is a weighted directed graph with nodes 1, …, n and an edge from j to i with weight A(i, j) whenever A(i, j) ≠ ε. The weight of a path p = i_1 i_2 … i_k is equal to the total weight of the corresponding edges, i.e. w(p) = A(i_2, i_1) + … + A(i_k, i_{k−1}). Definition 2 (Regular (Row-Finite) Matrix [4]). A matrix A ∈ R^{n×n}_max is called regular (or row-finite) if there is at least one finite element in each row. The following notations deal with a row-finite matrix A ∈ R^{n×n}_max. The coefficient g = (g_1, …, g_n) ∈ {1, …, n}^n is called a finite coefficient iff A(i, g_i) ≠ ε for all 1 ≤ i ≤ n. We define the region matrix of A w.r.t. the finite coefficient g as

A_g(i, j) = A(i, j) if j = g_i, and A_g(i, j) = ε otherwise.

One can say that A_g is a matrix that keeps the finite elements of A indexed by g. Its conjugate A^c_g is obtained by negating and transposing the finite entries, i.e. A^c_g(j, i) = −A_g(i, j) whenever A_g(i, j) ≠ ε, and A^c_g(j, i) = ε otherwise.

Definite Forms of Tropical Matrices

The concept of a definite form of a tropical matrix was first introduced in [15]. Consider a given A ∈ R^{n×n}_max and let α be one of the maximal permutations of A. In this paper, we allow for a generalisation of the notion of definite form: we generate the definite form from the finite coefficients introduced above. Notice that a maximal permutation is a special case of a finite coefficient g = (g_1, …, g_n) in which all the g_i are different. Intuitively, the definite form over a finite coefficient g is established by: 1) a column arrangement of A using g, i.e. B(·, j) = A(·, g_j), and then 2) subtracting from each column the corresponding diagonal element, i.e. A_g(·, j) = B(·, j) − B(j, j) for all j ∈ {1, …, n}. Furthermore, we define two types of definite forms. We call the definite form introduced in [15] the column-definite form. As an additional form we define the row-definite form gA. The latter form is similar to the former, except that now a row arrangement is used, namely B(g_i, ·) = A(i, ·) for all i ∈ {1, …, n}. Notice that, in a row arrangement, two or more different rows of A may be moved into the same row of B; as a consequence, some rows of B remain empty. In these cases, ε is used to fill the empty rows. For rows with multiple entries, we take the maximum point-wise after subtracting the corresponding diagonal element. Example 1. Consider a tropical matrix A ∈ R^{3×3}_max and the finite coefficient g = (2, 1, 1); the row-definite form gA and the column-definite form A_g w.r.t. g are then obtained by the row and column arrangements described above. Proposition 1. The column-definite and row-definite forms of A ∈ R^{n×n}_max w.r.t. a finite coefficient g are A_g = A ⊗ A^c_g and gA = A^c_g ⊗ A, respectively.
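To make the tropical operations above concrete, the following numpy sketch implements ⊕, ⊗, the region matrix and its conjugate, and evaluates the two products of Proposition 1. The 3 × 3 matrix and the (0-indexed) finite coefficient are our own illustrative choices, not the data of Example 1.

import numpy as np

EPS = -np.inf  # the tropical null element epsilon

def oplus(A, B):
    return np.maximum(A, B)                  # entrywise tropical addition

def otimes(A, B):
    # [A otimes B](i, j) = max_k (A(i, k) + B(k, j))
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def region_matrix(A, g):
    Ag = np.full_like(A, EPS)
    rows = np.arange(A.shape[0])
    Ag[rows, g] = A[rows, g]                 # keep only the entries A(i, g_i)
    return Ag

def conjugate(Ag):
    C = np.full_like(Ag, EPS)
    i, j = np.where(np.isfinite(Ag))
    C[j, i] = -Ag[i, j]                      # negate and transpose finite entries
    return C

A = np.array([[2.0, 5.0, EPS],
              [3.0, EPS, 1.0],
              [EPS, 0.0, 4.0]])
g = np.array([1, 0, 2])                      # a finite coefficient, 0-indexed
Ag = region_matrix(A, g)
col_def = otimes(A, conjugate(Ag))           # Proposition 1: A_g = A otimes A_g^c
row_def = otimes(conjugate(Ag), A)           # Proposition 1: gA = A_g^c otimes A
print(col_def, row_def, sep="\n")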
Difference Bound Matrices as Tropical Matrices

This section discusses the idea of treating Difference Bound Matrices as tropical matrices, along with some related properties. Definition 3 (Difference Bound Matrices). A DBM in R^n is the intersection of sets defined by

{x ∈ R^n : x_i − x_j ∼_{i,j} d_{i,j}},  ∼_{i,j} ∈ {>, ≥},  0 ≤ i, j ≤ n,

where the variable x_0 is set to be equal to 0. The dummy variable x_0 is used to allow for single-variable relations x_i ∼ c, which can be written as x_i − x_0 ∼ c. Definition 3 slightly differs from [14], as we use the operators {>, ≥} instead of {<, ≤}; the reason for this alteration is to transfer DBMs into the tropical domain. A DBM in R^n can be expressed as a pair of matrices (D, S). The element D(i, j) stores the bound d_{i,j}, while S(i, j) represents the sign matrix of the operator, i.e. S(i, j) = 1 if ∼_{i,j} is ≥ and S(i, j) = 0 otherwise. In the case of i = j, it is convenient to put D(i, i) = 0 and S(i, i) = 1, as this corresponds to the trivial relation x_i − x_i ≥ 0. Notice that, under Definition 3, each DBM D in R^n is an (n + 1)-dimensional tropical matrix. Throughout this paper, we may not always include the sign matrix when recalling a DBM. Some operations and properties in tropical algebra can be used for DBM operations, such as intersection, computation of the canonical form, and emptiness checking. Such DBM operations are key for developing the abstraction procedures. In particular, the intersection of two DBMs D_1 and D_2 is represented by the tropical sum D_1 ⊕ D_2, which keeps the tighter (larger) bound entrywise. The sign matrix for D_1 ⊕ D_2 is determined separately, as it depends on the operator of the tighter bound. More precisely, suppose that S_1, S_2 and S are the sign matrices of D_1, D_2 and of D_1 ⊕ D_2, respectively; then S(i, j) = S_1(i, j) if D_1(i, j) > D_2(i, j), S(i, j) = S_2(i, j) if D_1(i, j) < D_2(i, j), and S(i, j) = min{S_1(i, j), S_2(i, j)} if D_1(i, j) = D_2(i, j). Any DBM admits a graphical representation, called the potential graph, interpreting the DBM D as a weighted directed graph [16]. Because each DBM is also a tropical matrix, the potential graph of D can be viewed as the precedence graph G(D). The canonical form of a DBM D, denoted by cf(D), is a DBM with the tightest possible bounds [14]. The advantage of the canonical-form representation is that emptiness checking can be evaluated very efficiently: indeed, for a canonical DBM (D, S), if there exists 0 ≤ i ≤ n such that D(i, i) > 0 or S(i, i) = 0, then the DBM corresponds to an empty set. Computing the canonical-form representation amounts to solving the all-pairs shortest path (APSP) problem over the corresponding potential graph [14,16]. (As we alter the definition of the DBM, it is now equal to the all-pairs longest path (APLP) problem.) One of the prominent algorithms is Floyd-Warshall [18], which has cubic complexity w.r.t. the dimension. On the other hand, in a tropical-algebra sense, [D^{⊗m}](i, j) corresponds to the maximal total weight of a path of length m from j to i in G(D). Furthermore, [⊕_{m=0}^{n+1} D^{⊗m}](i, j) is equal to the maximal total weight of a path from j to i. Thus, ⊕_{m=0}^{n+1} D^{⊗m} is indeed the solution of the APLP problem. Proposition 3 provides this alternative computation of the canonical form of a DBM D based on tropical algebra, namely cf(D) = ⊕_{m=0}^{n+1} D^{⊗m}. Proposition 4 relates non-empty canonical DBMs with the notion of a definite matrix: a tropical matrix A is called definite if per(A) = 0 and all diagonal elements of A are zero [17].
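A short sketch of the canonical-form computation of Proposition 3, reusing oplus and otimes (and EPS) from the previous sketch; the emptiness test below follows the diagonal criterion stated above and, for simplicity, ignores the sign matrix S.

import numpy as np
# oplus, otimes, EPS: as defined in the previous sketch

def tropical_identity(n):
    I = np.full((n, n), EPS)
    np.fill_diagonal(I, 0.0)
    return I

def canonical_form(D):
    # cf(D) = oplus_{m=0}^{n+1} D^{otimes m}: all-pairs longest paths of G(D)
    n = D.shape[0]
    acc = tropical_identity(n)       # the zeroth tropical power
    power = tropical_identity(n)
    for _ in range(n + 1):           # powers m = 1, ..., n+1
        power = otimes(power, D)
        acc = oplus(acc, power)
    return acc

def is_empty(D):
    # a canonical DBM is empty iff some diagonal entry exceeds zero
    return bool(np.any(np.diag(canonical_form(D)) > 0))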
MPL Abstractions Using Tropical Operations

This section introduces the concept of tropical abstractions. Firstly, the comparison with the abstraction method in [9] is described. Then, we provide a new procedure to generate abstract states and transitions based on tropical algebra.

Related Work

The notion of abstraction of an MPL system was first discussed in [9]. The procedure starts by transforming the MPL system characterised by A ∈ R^{n×n}_max into a PWA (piece-wise affine) model [9, Algorithm 2], and then considering the partitions associated to the obtained PWA [9, Algorithm 6]. The abstract states associated to the partitions are represented by DBMs. The transitions are then generated using one-step forward-reachability analysis [9]: first, the image of each abstract state w.r.t. the MPL system is computed; then, each image is intersected with the partitions associated to the other abstract states; finally, transition relations are defined for each non-empty intersection. This procedure is summarised in [9, Algorithm 7]. The computation of the image and of the inverse image of a DBM is described in [12]. These computations are used to perform forward and backward reachability analysis, respectively. The worst-case complexity of both procedures is O(n³), where n is the number of variables in D excluding x_0. A more detailed explanation of the image and inverse image computation of a DBM is given in Section 3.3.

Generating the Abstract States

We begin by recalling the PWA representation of an MPL system characterised by a row-finite matrix A ∈ R^{n×n}_max. It is shown in [10] that each MPL system can be expressed as a PWA system. The PWA system comprises convex domains (or PWA regions) with corresponding affine dynamics. The PWA regions are generated from the coefficients g = (g_1, …, g_n) ∈ {1, …, n}^n. As shown in [9], the PWA region corresponding to the coefficient g is

R_g = ∩_{i=1}^{n} ∩_{j=1}^{n} {x ∈ R^n : x_{g_i} − x_j ≥ A(i, j) − A(i, g_i)}.   (5)

Notice that, if g is not a finite coefficient, then R_g is empty; however, even a finite coefficient might lead to an empty set. Recall also that the DBM R_g in (5) is not always in canonical form. Definition 4 (Adjacent Regions [9, Def. 3.10]). Suppose R_g and R_{g′} are non-empty regions generated by (5); these regions may be adjacent in the sense of [9, Def. 3.10], which we denote by R_g ∼ R_{g′}. The affine dynamics of a non-empty R_g are

x_i(k + 1) = x_{g_i}(k) + A(i, g_i),  i ∈ {1, …, n}.   (6)

Notice that Equation (6) can be expressed as x(k + 1) = A_g ⊗ x(k), where A_g is the region matrix corresponding to the finite coefficient g. As mentioned before, a PWA region R_g is also a DBM. The DBM R_g has no dummy variable x_0; for simplicity, we are allowed to consider R_g as a matrix, that is, R_g ∈ R^{n×n}_max.
We show that R_g is related to the row-definite form w.r.t. the finite coefficient g; in fact, R_g = gA ⊕ I_n (Proposition 5, proven in the Appendix). Algorithm 1 provides a procedure to generate the PWA system from a row-finite A ∈ R^{n×n}_max. It consists of: 1) generating the region matrices (line 3) and their conjugates (line 4), 2) computing the row-definite form (line 5), and 3) emptiness checking of the DBM R_g (lines 6-7). The first two steps are based on tropical operations, while the last one uses the Floyd-Warshall algorithm. The complexity of Algorithm 1 depends on line 6, that is, O(n³). The worst-case complexity of Algorithm 1 is O(n^{n+3}), because there are n^n possibilities at line 1. However, we do not expect to incur this worst-case complexity, especially when a row-finite A has several ε elements in each row. In [9], the abstract states are generated via a refinement of the PWA regions. Notice that, for each pair of adjacent regions R_g and R_{g′}, R_g ∩ R_{g′} ≠ ∅; the intersection of adjacent regions is removed from the region with the lower index. Instead of removing the intersection of adjacent regions, the partitioning of the PWA regions can be established by choosing the sign matrix S_g of R_g. As we can see in (5), all operators are ≥; thus, by (5), S_g(i, j) = 1 for all i, j ∈ {1, …, n}. In this paper, we use a rule, referred to as (7), to decide the sign matrix of R_g; this rule guarantees an empty intersection for each pair of regions.

Algorithm 1: Generating the PWA system using tropical operations
Input: A ∈ R^{n×n}_max, a row-finite tropical matrix
Output: R, A, a PWA system over R^n, where R is a set of regions and A represents a set of affine dynamics
1 for g ∈ {1, …, n}^n do
2   if g is a finite coefficient then
3     generate the region matrix A_g
4     compute the conjugate A^c_g
5     R_g ← (A^c_g ⊗ A) ⊕ I_n (the row-definite form)
6     if R_g is not empty then
7       add R_g to R and A_g to A

Algorithm 2 is a modification of Algorithm 1 obtained by applying the rule (7) before checking the emptiness of R_g. The notation R_g := (R_g, S_g) in line 7 emphasises that the DBM R_g is now associated with S_g. Algorithm 2 generates the partitions of the PWA regions, which represent the abstract states of an MPL system characterised by A ∈ R^{n×n}_max. The worst-case complexity of Algorithm 2 is similar to that of Algorithm 1. Remark 1. The R_g resulting from Algorithm 1 and Algorithm 2 is an n-dimensional matrix, which represents a DBM without the dummy variable x_0. This condition violates Definition 3. To resolve this, the system matrix A ∈ R^{n×n}_max is extended into an (n + 1)-dimensional matrix by adding a 0-th row and column, with A(0, 0) = e and all the other new entries equal to ε. As a consequence, the finite coefficient g is now an (n + 1)-row vector g = (g_0, g_1, …, g_n), where g_0 is always equal to 0. For the rest of this paper, all matrices are indexed starting from zero. As explained in [9], each partition of the PWA regions is treated as an abstract state; therefore, the number of abstract states is equal to the cardinality of the partition. If R̂ is the set of abstract states, then R̂ is the collection of all non-empty R_g generated by Algorithm 2.
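Algorithms 1 and 2 can be condensed as in the following sketch, which enumerates the finite coefficients, builds R_g = gA ⊕ I_n via Proposition 5, and keeps the non-empty regions. The helper functions are those of the previous sketches, and the sign-matrix bookkeeping of Algorithm 2 is omitted here for brevity.

import itertools
import numpy as np
# region_matrix, conjugate, otimes, oplus, tropical_identity, is_empty: see above

def finite_coefficients(A):
    finite_cols = [np.where(np.isfinite(row))[0] for row in A]
    return itertools.product(*finite_cols)       # all g with A(i, g_i) finite

def pwa_regions(A):
    regions = {}
    for g in finite_coefficients(A):
        g = np.array(g)
        gA = otimes(conjugate(region_matrix(A, g)), A)   # row-definite form
        Rg = oplus(gA, tropical_identity(A.shape[0]))    # Proposition 5
        if not is_empty(Rg):
            regions[tuple(g)] = Rg                       # abstract state R_g
    return regions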
Image and Inverse Image Computation of DBMs

This section describes a procedure to compute the image of DBMs w.r.t. affine dynamics. First, we recall the procedures from [12]: the image of D over the given affine dynamics is generated by constructing the DBM that combines D with its affine dynamics, computing its canonical form, and removing all inequalities containing the original (non-primed) variables. This procedure can be improved by manipulating the DBM D directly from the affine dynamics. By (6), one can write x′_i = x_{g_i} + A_g(i, g_i), where x_i and x′_i represent the current and next variables, respectively. For each pair (i, j), we have

x′_i − x′_j = x_{g_i} − x_{g_j} + A_g(i, g_i) − A_g(j, g_j).

This relation ensures that the bound of x′_i − x′_j can be determined uniquely from x_{g_i} − x_{g_j} and A_g(i, g_i) − A_g(j, g_j). Proposition 6. The image of a DBM D w.r.t. the affine dynamics x′_i = x_{g_i} + A_g(i, g_i), i ∈ {1, …, n}, is the set characterized by D′ = ∩_{i=1}^{n} ∩_{j=1}^{n} {x′ ∈ R^n : x′_i − x′_j = x_{g_i} − x_{g_j} + A_g(i, g_i) − A_g(j, g_j)}, where the bound of x_{g_i} − x_{g_j} is taken from D. Algorithm 3 turns this into a procedure to generate the image of (D, S) w.r.t. the affine dynamics represented by x′ = A_g ⊗ x. It requires the DBM (D, S) to be located in a PWA region R_g; this means that there is exactly one finite coefficient g such that (D, S) ⊆ R_g. The complexity of Algorithm 3 is O(n²), as the addition step at line 4 has complexity O(1). As an alternative, we also show that the image of a DBM can be computed by tropical matrix multiplications with the corresponding region matrix A_g: by Proposition 7, the image equals A_g ⊗ D ⊗ A^c_g. The procedure to compute the image of a DBM D w.r.t. the MPL system can be viewed as an extension of Algorithm 3: the DBM D is first intersected with each region of the PWA system, and Algorithm 3 is then applied to each non-empty intersection. The worst-case complexity is O(|R|n²), where |R| denotes the number of PWA regions. In [12], the procedure to compute the inverse image of D′ w.r.t. affine dynamics involves: 1) constructing a DBM D that consists of D′ and its corresponding affine dynamics, 2) generating the canonical form of D, and 3) removing all inequalities with primed variables. The complexity of computing the inverse image with this procedure is O(n³), as it involves the emptiness checking of a DBM [12]. Example 4 illustrates this procedure: the DBM generated from a given D′ and the affine dynamics is put in canonical form, and the inverse image of D′ over the given affine dynamics is then computed by removing all inequalities containing the primed variables. The inverse image of D′ can also be established by manipulating D′ directly from the affine dynamics. Notice that, from (6), we have x_{g_i} − x_{g_j} = x′_i − x′_j − A_g(i, g_i) + A_g(j, g_j). Unlike the previous case, it is possible that x_{g_i} − x_{g_j} has multiple bounds; this happens because there can be indices with g_{i_1} = g_{i_2} but i_1 ≠ i_2. In this case, the bound of x_{g_i} − x_{g_j} is taken as the tightest bound among all the possibilities. Similar to Algorithm 3, Algorithm 4 has complexity O(n²). In tropical algebra, the procedure of Algorithm 4 can be expressed as tropical matrix multiplications using a region matrix and its conjugate: by Proposition 8, the inverse image equals (A^c_g ⊗ D′ ⊗ A_g) ⊕ I_{n+1}. The procedure to compute the inverse image of a DBM D′ w.r.t. the MPL system can be viewed as an extension of Algorithm 4: first, we compute the inverse image of the DBM D′ w.r.t. all the affine dynamics; then each inverse image is intersected with the corresponding PWA region. The worst-case complexity is O(|R|n²).

Generating the Abstract Transitions

As mentioned before, the transition relations are generated by one-step forward-reachability analysis, which involves the image computation of each abstract state. Suppose R̂ = {r̂_1, …, r̂_{|R̂|}} is the set of abstract states generated by Algorithm 2 (R̂ is the collection of non-empty R_g; we use the lowercase r̂_i for the sake of simplicity). There is a transition from r̂_i to r̂_j if Im(r̂_i) ∩ r̂_j ≠ ∅, where Im(r̂_i) = {A ⊗ x | x ∈ r̂_i}, which can be computed by Algorithm 3. Notice that each abstract state corresponds to a unique affine dynamics. The procedure to generate the transitions is summarized in Algorithm 5.
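Propositions 7 and 8 turn the image and inverse-image computations into the short tropical products below (helpers as in the earlier sketches); the DBMs are represented here as plain (n+1)-dimensional bound matrices, again ignoring the sign matrices.

def image(D, Ag):
    # Proposition 7: Im(D) = A_g otimes D otimes A_g^c
    return otimes(otimes(Ag, D), conjugate(Ag))

def inverse_image(Dp, Ag):
    # Proposition 8: Im^{-1}(D') = (A_g^c otimes D' otimes A_g) oplus I_{n+1}
    n = Dp.shape[0]
    return oplus(otimes(otimes(conjugate(Ag), Dp), Ag), tropical_identity(n))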
Computational Benchmarks

We compare the run-times of the abstraction algorithms in this paper with the procedures in VeriSiMPL 1.4 [13]. For increasing n, we generate matrices A ∈ R^{n×n}_max with two finite elements in each row, with values ranging between 1 and 100; the location and value of the finite elements are chosen randomly. The computational benchmark has been implemented on a high-performance computing cluster at the University of Oxford [19]. We run the experiments for both procedures (VeriSiMPL 1.4 and tropical) using MATLAB R2017a with parallel computing. Over 10 different MPL systems for each dimension, Table 1 shows the running time needed to generate the abstract states and transitions; each entry reports the average and maximal values. With regard to the generation of abstract states, the tropical-algebra-based algorithm is much faster than VeriSiMPL 1.4. As the dimension increases, we see an increasing gap in the running times. For a 12-dimensional MPL system, over 10 independent experiments, the time needed to compute the abstract states using the tropical algorithm is less than 1 second; in comparison, the average running time using VeriSiMPL 1.4 for the same dimension is 8.34 seconds. For the generation of transitions, the running time of the tropical-algebra-based algorithm is slightly faster than that of VeriSiMPL 1.4. We remind the reader that the procedure to generate transitions involves the image computation of each abstract state. Complementing the second and fourth columns of Table 1, Table 2 shows the running time needed to compute the images of the abstract states; each entry reports the average and maximum running times. It shows that our proposed algorithm for the image computation of DBMs is faster than VeriSiMPL 1.4. We also compare the running times of the algorithms when applying forward- and backward-reachability analysis. We generate the forward reach set [9, Def. 4.1] and the backward reach set [9, Def. 4.3] from an initial and a final set, respectively. In more detail, suppose X_0 is the set of initial conditions; the forward reach set X_k is defined recursively as the image of X_{k−1}, namely X_k = Im(X_{k−1}) = {A ⊗ x : x ∈ X_{k−1}}. On the other hand, suppose Y_0 is a set of final conditions; the backward reach set Y_{−k} is defined via the inverse image of Y_{−k+1}, namely Y_{−k} = Im^{−1}(Y_{−k+1}) = {y : A ⊗ y ∈ Y_{−k+1}}. Here n is the dimension of A, and we select X_0 = {x ∈ R^n : 0 ≤ x_1 ≤ 1, …, 0 ≤ x_n ≤ 1} and Y_0 = {y ∈ R^n : 90 ≤ y_1 ≤ 100, …, 90 ≤ y_n ≤ 100} as the sets of initial and final conditions, respectively. The experiments compute the forward reach sets X_1, …, X_N and the backward reach sets Y_{−1}, …, Y_{−N} for N = 10. Notice that it is possible that the inverse image of Y_{−k+1} results in an empty set: in this case, the computation of the backward reach sets is terminated, since Y_{−k} = … = Y_{−N} = ∅. (If this termination happens, it applies to both VeriSiMPL 1.4 and the algorithms based on tropical algebra.) Table 3 reports the average computation times for the PWA system and the reach sets over 10 independent experiments for each dimension. In general, the algorithms based on tropical algebra outperform those of VeriSiMPL 1.4. For a 15-dimensional MPL system, the average time to generate the PWA system using VeriSiMPL 1.4 is just over 20 seconds; in comparison, the computation time for the tropical algorithm is under 5 seconds. The tropical algorithms also show advantages in computing reach sets: as shown in Table 3, the average computation time for forward- and backward-reachability analysis is slightly shorter when using the tropical procedures. There is evidence that the average time to compute the backward reach sets decreases as the dimension increases; this happens because the computation is terminated earlier once there is a k ≤ N such that Y_{−k} = ∅.
Notice that this condition occurs for both VeriSiMPL 1.4 and the new algorithms based on tropical algebra.

Conclusions

This paper has introduced the concept of MPL abstractions using tropical operations. We have shown that the generation of abstract states is related to the row-definite form of the given matrix. The computation of the image and inverse image of DBMs over the affine dynamics has also been improved based on tropical algebra operations. The procedure has been implemented on a numerical benchmark and compared with VeriSiMPL 1.4. Algorithm 2 has shown a strong advantage in generating the abstract states, especially for high-dimensional MPL systems. The algorithms for the generation of transitions and for reachability analysis (Algorithms 3-5) also display an improvement. For future research, the authors are interested in extending the tropical abstractions to non-autonomous MPL systems [3], with dynamics that are characterised by non-square tropical matrices.

On the other hand, as all diagonal elements of D are zero, we have per(D) ≥ D(0, 0) ⊗ … ⊗ D(n, n) = 0. Hence, we can conclude that per(D) = 0, with the identity permutation being one of the maximal permutations.

A.5 Proof of Proposition 5

Proposition 5. For each finite coefficient g, R_g = gA ⊕ I_n.

Proof: Notice that the value of A(i, j) − A(i, g_i) in (5) corresponds to R_g(g_i, j). Furthermore, we have

gA(g_i, j) = ⊕_{i*} (A(i*, j) − A(i*, g_i)),

where i* ranges over the indices in {1, …, n} such that g_{i*} = g_i. In the inequality part of (5), one may find multiple bounds for x_{g_i} − x_j; this happens whenever g is not a permutation. In that case, the bound for x_{g_i} − x_j is the maximum of all the corresponding bounds. Thus, R_g(g_i, j) = ⊕_{i*} (A(i*, j) − A(i*, g_i)) = gA(g_i, j) for all i, j ∈ {1, …, n}. From here we cannot conclude R_g = gA, because gA may admit infinite (ε) diagonal elements while R_g does not. However, all diagonal elements in gA ⊕ I_n are 0; therefore R_g = gA ⊕ I_n. On the other hand, if g is a permutation, we have R_g = gA = gA ⊕ I_n.

A.6 Proof of Proposition 6

Proposition 6. The image of a DBM D w.r.t. the affine dynamics x′_i = x_{g_i} + A_g(i, g_i), for i ∈ {1, …, n}, is the set characterized by D′ = ∩_{i=1}^{n} ∩_{j=1}^{n} {x′ ∈ R^n : x′_i − x′_j = x_{g_i} − x_{g_j} + A_g(i, g_i) − A_g(j, g_j)}, where the bound of x_{g_i} − x_{g_j} is taken from D.

Proof: Suppose D′ is the image of D w.r.t. the given affine dynamics. The DBM D′ can be computed by manipulating D according to the affine dynamics. For each pair (i, j), we have x′_i − x′_j = x_{g_i} − x_{g_j} + A_g(i, g_i) − A_g(j, g_j). From here, we can infer that the bound for x′_i − x′_j (primed version) corresponds to the bound of x_{g_i} − x_{g_j} (non-primed version) shifted by the scalar A_g(i, g_i) − A_g(j, g_j). Therefore, D′ is as characterized above.

A.7 Proof of Proposition 7

Proposition 7. The image of a DBM D ∈ R^n w.r.t. the affine dynamics x′ = A_g ⊗ x is D′ = A_g ⊗ D ⊗ A^c_g.

Proof: If D′ is expressed as a matrix, then D′(i, j) = D(g_i, g_j) + A_g(i, g_i) − A_g(j, g_j) for i, j ∈ {0, …, n}, under the convention x_0 = g_0 = 0 and D(0, 0) = A_g(0, 0) = 0. On the other hand,

[A_g ⊗ D ⊗ A^c_g](i, j) = ⊕_{k=0}^{n} ( A_g(i, k) ⊗ ( ⊕_{l=0}^{n} D(k, l) ⊗ A^c_g(l, j) ) ).

Notice that, for a fixed j, there is a unique l such that A^c_g(l, j) ≠ ε, namely l = g_j. Similarly, for a fixed i, A_g(i, k) ≠ ε iff k = g_i. Therefore, [A_g ⊗ D ⊗ A^c_g](i, j) = A_g(i, g_i) + D(g_i, g_j) + A^c_g(g_j, j) = D(g_i, g_j) + A_g(i, g_i) − A_g(j, g_j) = D′(i, j).

A.8 Proof of Proposition 8

Proposition 8. The inverse image of a DBM D′ w.r.t. the affine dynamics x′_i = x_{g_i} + A_g(i, g_i), for i ∈ {1, …, n}, is the set characterized by
D = (A^c_g ⊗ D′ ⊗ A_g) ⊕ I_{n+1}.

Proof: By the affine dynamics, x_{g_i} − x_{g_j} = x′_{i*} − x′_{j*} − A_g(i*, g_i) + A_g(j*, g_j) for every pair (i*, j*) with g_{i*} = g_i and g_{j*} = g_j; hence x_{g_i} − x_{g_j} has multiple bounds, up to the number of different pairs (i*, j*). The tightest bound of x_{g_i} − x_{g_j} is the maximum one, that is,

D(g_i, g_j) = ⊕_{i*} ⊕_{j*} ( −A_g(i*, g_i) + D′(i*, j*) + A_g(j*, g_j) ) = [A^c_g ⊗ D′ ⊗ A_g](g_i, g_j).

From here, we have D(i, j) = [A^c_g ⊗ D′ ⊗ A_g](i, j) if both i and j are in G, where G denotes the set of indices attained by g (including 0). If i ∉ G or j ∉ G, then D(i, j) = ε = [A^c_g ⊗ D′ ⊗ A_g](i, j). However, since the diagonal elements of D must equal zero (cf. Definition 3), we have D = (A^c_g ⊗ D′ ⊗ A_g) ⊕ I_{n+1}.
2018-06-12T15:24:01.000Z
2018-06-12T00:00:00.000
{ "year": 2018, "sha1": "c0d9ace261c2658e15271fa63d1104773b947d13", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1806.04604", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "73f7f8d35431fa6ff87d0009ab5c3a1d7360a54f", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
16001536
pes2o/s2orc
v3-fos-license
Particle Propagation on a Circle with a Point Interaction

We study particle propagation on a circle in the presence of a point interaction. We show that the one-particle Feynman kernel can be written as a sum over reflected and transmitted trajectories, which are weighted by the elements of the n-th power of the scattering matrix evaluated on a line with a point interaction. As a by-product we find a three-parameter family of trace formulae as a generalization of the Poisson summation formula.

Introduction

Quantum systems restricted to a bounded domain have become increasingly relevant for theoretical physics. There, the role of boundary conditions is very important not only in the long-distance (infrared) regime but also in the short-distance (ultraviolet) regime. Mathematically, the correct framework to treat boundary conditions in quantum theory is the analysis of von Neumann's self-adjoint extensions of the Hamiltonian operator [1]. Physically speaking, the variety of boundary conditions provided by the self-adjoint extensions of the Hamiltonian implies that a very rich structure of point interactions is available in quantum theory. The analysis of self-adjoint extensions of the Hamiltonian, as the name suggests, is essentially based on the Hamiltonian operator approach. However, in Feynman's path-integral approach, we do not know a priori how to incorporate the boundary conditions into the integration measure or into the path-integral weight. As discussed in many textbooks (see for example [2,3]), the naive path-integral representation for a system on a bounded domain leads to a wrong boundary behavior and hence requires modification. The most rigorous way to incorporate the boundary conditions into the Feynman kernel is to evaluate it by the operator formalism. However, the kernel evaluated by the operator formalism becomes a summation over the energy spectrum. In order to switch to the path-integral description, we have to perform a resummation of the energy spectrum into the paths of the space. In general, this resummation is accomplished by trace formulae. Trace formulae provide a direct connection between the quantum energy spectrum and the classical length spectrum (periodic orbits). However, this connection is in general an asymptotic relation, valid for large wave numbers, just as in the case of the Gutzwiller trace formula [4]. There are only a few cases where trace formulae become identities. Noteworthy among these are the Poisson summation formula and the Selberg trace formula [5]: the former is the trace formula for the Laplace operator on flat tori, the latter on Riemannian manifolds with constant negative curvature. Although point interactions have been extensively discussed in the operator-formalism literature, the path-integral description of point interactions has not been fully understood yet. Mathematically speaking, this is mainly due to the lack of trace formulae suitable for point interactions. Physically speaking, on the other hand, it is mainly due to the lack of knowledge about the classical trajectories for a particle in the presence of point interactions. The aim of this paper is to fill a gap in the description of boundary conditions between the operator formalism and the path-integral formalism: we would like to propose a physically transparent prescription for how to incorporate the boundary conditions obtained in the operator formalism into the path-integral description.
To illustrate our idea in a simple setting, in this paper we will consider one-particle quantum mechanics on a circle in the presence of a single point interaction. To begin with, let us first consider a quantum particle on a circle of circumference L in the presence of a δ′-interaction described by the Hamiltonian H = −d²/dx² + 2cδ′(x), where c ∈ R is the dimensionless coupling constant and the prime (′) indicates the derivative with respect to x. (Here, as in the following, we are using units where ħ = 2m = 1.) It is known that the δ′-interaction belongs to the so-called scale-independent subfamily of point interactions [6] and is specified by scale-invariant boundary conditions on ψ [7,8], where ψ is the square-integrable wave function on the interval (0, L). Although the Feynman kernel of this system has been analyzed in the literature [9,10], the physical interpretation of the weight factors (see below) remains open. We would like to first address this issue. As is ubiquitous for scale-independent point interactions, in this δ′-interaction case the wave numbers are quantized in integer steps, so that it is easy to rewrite the Feynman kernel K(x, T; x_0, 0) = ⟨x|e^{−iHT}|x_0⟩ evaluated in the operator formalism into the path-integral representation with the help of the Poisson summation formula. The resultant kernel takes the form

K(x, T; x_0, 0) = (4πiT)^{−1/2} Σ_{n=−∞}^{∞} [ cos(nθ) e^{i(nL+x−x_0)²/(4T)} ∓ sin(nθ) e^{i((n+1)L−x−x_0)²/(4T)} ],   (1)

where 0 ≤ θ := Arccos[(1 − c²)/(1 + c²)] < π and the − (+) sign applies for c > 0 (c < 0); Arccos is the principal value of the inverse cosine. Notice that the presence of a point interaction breaks the global translational invariance. As a consequence, the kernel (1) is the sum of partial amplitudes for the translational-invariant and translational-variant classes of trajectories, which are weighted by the factors cos(nθ) and sin(nθ), respectively. Before discussing the physical meaning of the weight factors, we have to reveal the particle propagations described by (1). To this end, it should first be noted that c = 0 leads to θ = 0, so that Eq. (1) becomes the well-known form of the one-particle Feynman kernel on a circle with periodic boundary conditions. As discussed in many textbooks (see for example Refs. [2,3]), in this c = 0 case the kernel (1) is the sum of partial amplitudes for transitions via classical paths distinguished by the homotopy class of S¹, i.e., the winding number. For nonzero c, however, the classical trajectories of a particle are not so trivial, due to the presence of the δ′-potential, which acts as a point scatterer. When a particle reaches the position of the point scatterer, there are in general two possibilities: reflection or transmission. Thus the paths for a particle interacting n times with the point interaction must consist of 2ⁿ distinct paths. As an example, the classical world lines for n = 2 and −2 in (1) are depicted in Figure 1. As we will see below, half of these 2ⁿ paths belong to the translational-invariant class and the other half to the translational-variant class. Now it is time to discuss the physical meaning of the weight factors. It is intuitively clear that a reflected trajectory should be weighted with a reflection coefficient R_+ (R_−) every time the particle is reflected by the point scatterer from left to left (right to right), where R_+ (R_−) is the reflection coefficient for a particle propagating on the negative half-line R_− (positive half-line R_+).
Similarly, a transmitted trajectory should be weighted with a transmission coefficient T_+ (T_−) every time the particle is transmitted by the point scatterer from left to right (right to left), where T_+ (T_−) is the transmission coefficient for a particle propagating from R_− to R_+ and vice versa. The physical meaning of the weight factors is now obvious: these must be the elements of the n-th power of the one-particle scattering matrix S^{(1)}, which we would like to call the n-times scattering matrix S^{(n)}, evaluated on a line with a point interaction at the origin (see Section 3). In the case of the δ′-interaction it is easy to compute the one-particle scattering matrix. The result is

S^{(1)} = ( cos θ   ±sin θ ; ∓sin θ   cos θ ),   (2)

which is just a rotation matrix. Thus the n-times scattering matrix is given by simply replacing the argument θ in (2) by nθ. These matrix elements are nothing but the weight factors in (1).

Figure 1: Classical world lines for a particle scattered twice by the point interaction. Time flows along the vertical direction. The dashed line represents the world line of the point interaction. In the case of the δ′-interaction with c > 0, these 2 × 2² = 8 trajectories are weighted by the factors T_±T_± = cos²θ, R_∓R_± = −sin²θ, R_±T_± = ∓sinθ cosθ and T_∓R_± = ∓cosθ sinθ.

In this sense the identity cos²(nθ) + sin²(nθ) = 1 is a consequence of the unitarity of the scattering matrix and can be viewed as partial-amplitude unitarity. So far we have studied only the case of the δ′-interaction, yet it seems that the above discussion is valid for any one-particle quantum mechanics on a circle with a single point interaction. As we will show in the rest of this paper, this observation is indeed true. Now it is time to give an explicit statement of the purpose of this paper. The main goal of this paper is to show the following statement: the Feynman kernel for a spinless particle moving freely on a circle of circumference L with a single point interaction at the origin can be written into the generic form (3), in which the partial amplitudes are weighted by the elements S^{(n)}_{±±} and S^{(n)}_{±∓} of the n-times scattering matrix. This statement is based on the following observations: 1. The classical trajectories x_cl(t) for a particle propagating from (x_0, 0) to (x, T) scattered n times by the point interaction are exhausted by x_cl(t) = x_0 + v_cl t, where v_cl = (±nL + x − x_0)/T for the translational-invariant class and v_cl = ((±n + 1)L − x − x_0)/T for the translational-variant class of classical trajectories; here the '+' sign is for the trajectory of a right-moving outgoing particle and the '−' sign for that of a left-moving outgoing particle. 2. Any paths for a particle traveling from (x_0, 0) to (x, T) with momentum p fall into four cases, that is, propagation from left to right, from left to left, from right to left, and from right to right. The corresponding plane waves are e^{ip(nL+x−x_0)}, e^{ip((n+1)L−x−x_0)}, e^{−ip(−nL+x−x_0)} and e^{−ip((−n+1)L−x−x_0)}, respectively; these four classes of classical trajectories should be weighted by the corresponding elements of the n-times scattering matrix S^{(n)}. 3. The bound-state contribution, even if it exists, does not affect the scattering process on the line, so that it can be added at the end of the computation. The purpose of this paper is thus to show the validity of (3) for all point interactions allowed in quantum mechanics, which can be classified, as mentioned before, by means of the analysis of the self-adjoint extensions of the Hamiltonian operator.
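The composition rule behind the weight factors can be checked numerically. The sketch below takes the rotation form of the δ′ S-matrix for c > 0 (the placement of the off-diagonal signs is one consistent choice of convention, not fixed by the text) and verifies that its n-th power is a rotation by nθ, so that the partial-amplitude unitarity cos²(nθ) + sin²(nθ) = 1 holds.

import numpy as np

c = 0.7                                            # illustrative coupling constant
theta = np.arccos((1 - c ** 2) / (1 + c ** 2))
S1 = np.array([[np.cos(theta), np.sin(theta)],
               [-np.sin(theta), np.cos(theta)]])   # one-particle S-matrix, a rotation

for n in range(1, 8):
    Sn = np.linalg.matrix_power(S1, n)             # the n-times scattering matrix
    expected = np.array([[np.cos(n * theta), np.sin(n * theta)],
                         [-np.sin(n * theta), np.cos(n * theta)]])
    assert np.allclose(Sn, expected)               # rotation by n*theta
    assert np.allclose(Sn @ Sn.T, np.eye(2))       # unitarity of S^(n)
print("partial-amplitude unitarity verified")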
In physical language, the self-adjoint extension of the Hamiltonian is translated into the requirement of global conservation of the probability current density, j(0+) = j(L−), with ψ being the wave function on the Hilbert space consisting of square-integrable functions on the interval (0, L). The quantum mechanical system for a free particle on a circle is known to admit a U(2) family of distinct point interactions, characterized by the boundary conditions [9,10]

(U − 1l)Ψ + iL_0(U + 1l)Ψ′ = 0,  Ψ := (ψ(0+), ψ(L−))^⊤,  Ψ′ := (ψ′(0+), −ψ′(L−))^⊤,   (4)

where U is a 2 × 2 unitary matrix and L_0 is an arbitrary real constant length scale, introduced merely to adjust the length dimension of the equation. For the following discussions it is convenient to parameterize the matrix U ∈ U(2) in the spectral decomposition form

U = e^{iα_+} P_+ + e^{iα_−} P_−,  P_± = (1/2)(1l ± e·σ),   (6)

where σ = (σ_1, σ_2, σ_3) is the vector of Pauli matrices, e^{iα_±} (0 ≤ α_± < 2π) are the two eigenvalues of the unitary matrix U, and P_± are the corresponding projection operators fulfilling P_+ + P_− = 1l, (P_±)² = P_± and P_±P_∓ = 0. Here e = (e_x, e_y, e_z) is a real unit vector satisfying the condition e_x² + e_y² + e_z² = 1. In this paper we derive analytical forms of the one-particle Feynman kernel in terms of these parameters. It is worthwhile to point out here that, if we multiply the boundary conditions (4) by the projection operators P_± on the left, they boil down to the following two independent equations:

P_±(Ψ + L_0 cot(α_±/2) Ψ′) = 0.   (8)

It should be noted that (8) is not well defined when α_± = 0. We will, however, use (8) instead of (4) as the boundary conditions, by taking a careful limit in the case α_± = 0. The rest of this paper is organized as follows. In Section 2 we derive the general forms of the reflection and transmission coefficients for a particle on a whole line in the presence of a point interaction at the origin. In Section 3 we define the one-particle scattering matrix on R \ {0} and then introduce the n-times scattering matrix. Section 4 is devoted to a detailed analysis of the spectral properties of the free Hamiltonian (i.e. the Laplace operator) on S¹ \ {0}. As a by-product we find a three-parameter family of trace formulae, which provide a direct connection between the quantum energy spectrum and the classical length spectrum of S¹ with a point singularity. These can be regarded as generalizations of the Poisson summation formula. In Section 5 we give a proof of (3). In Section 6 we present explicit examples of the Feynman kernels for several subfamilies of the U(2) family of point interactions. We conclude in Section 7.

Reflection and transmission coefficients on R \ {0}

Particle collision and production processes are absent from one-particle quantum mechanics with a single point interaction. Nevertheless, reflection and transmission off the point interaction give rise to a nontrivial scattering matrix. In this section we calculate the matrix elements of the scattering matrix, that is, the reflection and transmission coefficients for a continuum state scattered once by the point interaction at the origin on the whole line. For right- and left-moving incident waves with momentum k > 0 we take the standard scattering solutions

ψ_k^R(x) = e^{ikx} + R_+(k) e^{−ikx} for x < 0, and T_+(k) e^{ikx} for x > 0,   (10a)
ψ_k^L(x) = T_−(k) e^{−ikx} for x < 0, and e^{−ikx} + R_−(k) e^{ikx} for x > 0.   (10b)

Point interactions consistent with the probability conservation j(0+) = j(0−) are characterized by the same boundary conditions as (8), but with the two-component vectors

Ψ := (ψ(0+), ψ(0−))^⊤,  Ψ′ := (ψ′(0+), −ψ′(0−))^⊤.   (11)

Plugging (10a) and (10b) into the boundary conditions (8) with (11), we get the matrix equations (12), whose phase shifts can be written as

δ_±(k) = 2 Arccot( kL_0 cot(α_±/2) ),   (13)

where Arccot and Log are the principal values of the inverse cotangent and logarithm, respectively.
Now it is easy to find the reflection and transmission coefficients. Equation (12) implies that the matrix Z(k) is unitary and has the spectral decomposition Z(k) = e^{iδ+(k)} P+ + e^{iδ−(k)} P−, from which we find the explicit reflection and transmission coefficients. These results are consistent with those obtained in Refs. [11,12] with suitable redefinitions of the parameters. Several remarks are now in order.

2. The phase shifts δ±(k) satisfy certain functional identities, where the prime (′) denotes the derivative with respect to k. These identities will be important for the proof of (3).

3. The reflection and transmission coefficients satisfy relations involving complex conjugation, where * indicates the complex conjugate.

4. In terms of the reflection and transmission coefficients, the unitarity conditions of the matrix Z(k) can be written out explicitly.

5. The phase shifts δ±(k) become independent of the momentum k if α± = 0 or π: δ±(k) = 0 for α± = 0 and δ±(k) = π for α± = π. This is important for the discussion of scale-independent point interactions in Section 6.

Scattering matrix on R \ {0}

In this section we first introduce the one-particle scattering matrix (S-matrix) and then define the n-times scattering matrix, whose elements give the weight factors of the Feynman kernel for the contributions scattered n times by the point interaction. Let us first define the one-particle S-matrix S^(1) on a whole line in the presence of a single point interaction at the origin. In the basis of right- and left-moving momentum modes {|+k⟩, |−k⟩ | k > 0}, where ⟨x|±k⟩ = e^{±ikx}, the one-particle S-matrix is defined with matrix elements graphically represented in Figure 2. Noting that the S-matrix can be written as S^(1)(k) = Z(k)σ_1 and that Z(k) is unitary, we see that S^(1)(k) clearly satisfies the unitarity conditions, which are nothing but a consequence of the probability conservation j(0+) = j(0−). For the following discussions it is convenient to rewrite the S-matrix in the spectral decomposition form (22), where s±(k) are the two eigenvalues of S^(1)(k), given in (23a), and P±(k) are the corresponding projection operators constructed from a real unit vector ε(k) = ᵗ(ε_x(k), ε_y(k), ε_z(k)). Notice that these projection operators satisfy the relations P+(k) + P−(k) = 1l, [P±(k)]^2 = P±(k) and P±(k)P∓(k) = 0, and that s±(k) satisfy the relations (26).

Next we introduce the n-times scattering matrix S^(n) as the n-th power of S^(1). By construction it is obvious that the n-times scattering matrix S^(n)(k) satisfies the unitarity conditions S^(n)(k)† S^(n)(k) = 1l = S^(n)(k) S^(n)(k)†, which lead to the partial amplitude unitarity of the Feynman kernel. Thanks to the spectral decomposition (22), the n-times scattering matrix is easily computed. Although in the following discussions we do not need the explicit expression for S^(n), it may be instructive to write down its matrix elements. A straightforward calculation expresses them in terms of T_n and U_n, the Chebyshev polynomials of the first and second kind, respectively, which satisfy the relations

T_n(cos θ) = cos(nθ), U_n(cos θ) = sin((n + 1)θ)/sin θ, n = 0, 1, 2, · · · . (30)
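Since the Chebyshev identities (30) carry the whole n-dependence of S^(n), a quick numerical verification may be instructive. The following sketch is purely illustrative — it uses the standard three-term recurrences for T_n and U_n rather than anything specific to this paper — and checks T_n(cos t) = cos(nt) and U_n(cos t) = sin((n+1)t)/sin t.

```python
# Sketch: verify the Chebyshev identities (30) via the standard recurrence
# T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), and the same recurrence for U_n.
import math

def cheb_T(n, x):
    t_prev, t_curr = 1.0, x          # T_0 = 1, T_1 = x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
    return t_curr

def cheb_U(n, x):
    u_prev, u_curr = 1.0, 2.0 * x    # U_0 = 1, U_1 = 2x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u_curr = u_curr, 2.0 * x * u_curr - u_prev
    return u_curr

t = 0.9
for n in range(8):
    assert math.isclose(cheb_T(n, math.cos(t)), math.cos(n * t), abs_tol=1e-12)
    assert math.isclose(cheb_U(n, math.cos(t)),
                        math.sin((n + 1) * t) / math.sin(t), abs_tol=1e-12)
print("Chebyshev identities (30) verified for n = 0..7")
```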
Spectrum of S^1 \ {0}

Let us next study the spectrum of the quantum system of a particle on a circle in the presence of a point interaction described by the boundary conditions (4). Although the spectral properties of the system have already been studied in the literature [9,10], those results are not suitable for the purpose of this paper. In this Section we will uncover a remarkable relation between the scattering theory on R \ {0} discussed in the previous section and the positive energy spectrum of S^1 \ {0}. We also derive the trace formulae for the free Hamiltonian (Laplace operator) on S^1 \ {0}.

The general solution to the Schrödinger equation −d^2ψ/dx^2 = Eψ on S^1 \ {0} for positive energy E = k^2 > 0 is given by (31), where the phase factor e^{ikL} in the second term is introduced for later convenience. Notice that the two coefficients A(k) and B(k) may depend on k. The general solution for negative energy E = −κ^2 < 0 is obtained by simply replacing k with iκ in (31). We have to be careful, however, about zero energy solutions with E = 0, which are not necessarily obtained by the naive limit k → 0 in (31): the general solution for E = 0 is a first-degree polynomial, ψ(x) = A_0 + B_0 x. In this paper we call the above solution with B_0 = 0, as well as the negative energy solutions, bound states. It turns out that no knowledge about those bound states is necessary for the following discussions. Notice that a zero energy solution obtained as the limit k → 0 in (31) is ambiguous, because the two terms in (31) are not independent of each other when k = 0. This issue will be discussed later.

Substituting (31) into (8) we get two independent conditions. Since these two equations are orthogonal to each other, they can be combined into the single form (34), which follows from P+ + P− = 1l and S^(1) = (e^{iδ+} P+ + e^{iδ−} P−)σ_1. This eigenvalue equation indicates that the positive energy spectrum and eigenfunctions of single-particle quantum mechanics on S^1 \ {0} are completely determined by the one-particle S-matrix on R \ {0}. In the following we analyze the eigenvalue equation (34) in detail.

Spectrum quantization conditions

Let us first study the spectral properties of S^1 \ {0}. For non-vanishing A(k) and B(k) we have to impose a condition which has the two branches e^{−ikL} = s+(k) and e^{−ikL} = s−(k), where s±(k) are given in (23a). It should be pointed out that equations of this type are commonly referred to as Bethe ansatz equations. Indeed, the generalization to n-particle systems has been studied in the literature [12] under the name of impurity Bethe ansatz equations. The positive energy spectrum is determined by the positive roots of the equations f±(k) = 0 in (37), where f±(k) := kL + (1/i) log s±(k). It should be emphasized that we regard the logarithm as the multivalued function defined by log z = {ln|z| + i Arg z + 2πim | 0 ≤ Arg z < 2π, m ∈ Z}, where Arg z is the principal value of the argument. Each integer m determines the branch of the logarithm, and m = 0 corresponds to the principal branch. Note that if lim_{k→0} s±(k) = 1, the equation (37) may have zero energy solutions. However, the existence of such a zero energy solution does not necessarily imply a physical state in the spectrum, because the k = 0 solution in (31) becomes trivial and should be discarded if A(0) + B(0) = 0, even though A(0) and B(0) themselves are not identically zero. Nevertheless, it turns out that such a fake solution is needed in the trace formulae discussed in the next subsection and in the proof of (3).
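The quantization condition above can be made concrete with a small numerical experiment. The sketch below solves f(k) = kL + (1/i) log s(k) = 2πm by bisection, for a single hypothetical branch s(k) = e^{iδ(k)} with δ(k) = −2 arctan(c/k); this phase shift is invented purely for illustration and is not the paper's s±(k), but it shows how each branch m of the multivalued logarithm contributes one positive root k_m, i.e., one energy level E_m = k_m^2.

```python
# Sketch: positive roots of a Bethe-ansatz-type quantization condition
# f(k) = k*L + delta(k) - 2*pi*m = 0, with a hypothetical phase shift
# delta(k) = -2*arctan(c/k) (an illustrative choice, not the paper's s_pm).
import math

L, c = 1.0, 3.0

def f(k, m):
    return k * L - 2.0 * math.atan2(c, k) - 2.0 * math.pi * m

def bisect_root(m, lo=1e-9, hi=1e3, tol=1e-12):
    # f(., m) is monotone increasing in k here, so plain bisection suffices
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid, m) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

for m in range(1, 6):
    k_m = bisect_root(m)
    print(f"m = {m}: k_m = {k_m:.6f}, E_m = {k_m**2:.6f}")
```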
Trace formulae

In order to bridge the gap between the operator formalism and the path-integral formalism, we have to establish the trace formulae for S^1 \ {0}. To this end, let us consider the delta functions δ(f±(k)). Since the values assumed by f±(k) are kL + Arg[s±(k)] + 2mπ for all integers m, the delta functions δ(f±(k)) are periodic functions of f± with period 2π, so that they can be expanded in a Fourier series (39). Note that the left-hand side can be written as a sum over the roots k± ∈ σ±, where σ± are the sets of both positive and negative roots of the equations f±(k) = 0, defined in (40). Eq. (40) requires a more detailed explanation:

• The negative roots (m < 0) are related to the positive ones through (41), which follows from (26) and (36). These relations will be used in the proof of (3).

• The k±_0 = 0 roots appear only in four special cases of the parameters. It turns out that the k+_0 = 0 solution for α± = 0, and one of the two k±_0 = 0 solutions for α+ = 0, α− = 0 with e_x = 1 or with e_x = −1, are fake solutions with A(0) + B(0) = 0, as explained in the previous subsection. It should be emphasized that the k±_0 = 0 solutions (if they exist) must be included in σ±, irrespective of whether they are fake or genuine zero modes. We note that these remarks will be important for the proof of (3); however, they are not relevant to the rest of this subsection.

Now the identity (39) becomes the three-parameter family of trace formulae (43), which includes the Poisson summation formula in a certain region of the parameter space spanned by α+, α− and e_x. Notice that the derivatives of f±(k) are given explicitly and satisfy (45). As we will see in Section 4.4, f′±(k±_m) give the normalization factors for the positive energy eigenfunctions. Before closing this subsection we rewrite the formulae (43) in a more practically convenient form. By multiplying by a smooth test function F(k) and integrating over the range −∞ < k < ∞, the trace formulae (43) can be cast into the form (46). This identity will be useful for computations of the Casimir energy or for perturbative loop calculations of Feynman diagrams in quantum field theory with nontrivial extended defects (branes or boundaries).

Eigenfunctions

In terms of the orthonormal eigenvectors |±⟩ = ᵗ(A±(k), B±(k)), the energy eigenfunctions (31) can be rewritten as (51), where N±_m are the corresponding normalization factors. With the help of the identities (18), it is not difficult to show that the normalization factors can be written in the form (53).

Feynman kernel

In this Section we prove our main result, the formula (3). To this end, let us first discuss the case 0 < α± < 2π. In the operator formalism the Feynman kernel is then given by the spectral sum (54). We note that a fake k+_0 = 0 mode is not included in this summation, as it should be. Substituting (51) into (54) we obtain (55). By use of the relations (41), (45), (49a), (49b) and (53) for 0 < α± < 2π, we can rewrite (55) as (56). We should notice that the summations over k±_m can be enlarged to σ±, a fake k+_0 = 0 mode being added in (56) by virtue of the relation A+(k+_0) + B+(k+_0) = 0. Now we can use the trace formula (46), where the definitions (38) have been used. By use of the relations (26), (48a), (49a), (49b) and (50a)-(50d) we finally arrive at the conclusion (3). It is interesting to point out that the final expression (3) holds for the other values of α± as well, even though the relations (41), (45), (49a), (49b), and the existence or nonexistence of a fake zero mode, as well as of physical zero energy states, depend on α±. Furthermore, we emphasize that knowledge of the energy eigenvalues and eigenstates is required in the expression (54) for the Feynman kernel, while only the one-particle scattering matrix is needed to represent the Feynman kernel in our formulation. This suggests that the expression (3) is more fundamental than the original expression (54).
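Before turning to examples, the statement that the trace formulae (43) contain the Poisson summation formula can be sanity-checked in its most familiar form. The sketch below numerically verifies the classical Gaussian (Jacobi theta) identity Σ_{n∈Z} e^{−π a n²} = a^{−1/2} Σ_{k∈Z} e^{−π k²/a}; this is only the standard special case, not the full three-parameter family.

```python
# Sketch: numerical check of the Poisson summation formula in its Gaussian
# (Jacobi theta) form, the special case contained in the trace formulae (43).
import math

def theta_sum(a, N=200):
    return sum(math.exp(-math.pi * a * n * n) for n in range(-N, N + 1))

a = 0.37
lhs = theta_sum(a)
rhs = theta_sum(1.0 / a) / math.sqrt(a)
print(lhs, rhs)                          # the two sums agree
assert math.isclose(lhs, rhs, rel_tol=1e-12)
```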
Examples

In this Section we present explicit examples of the Feynman kernels for several subfamilies of point interactions, which are partly classified in the literature [9,10] and summarized in Table 1. Since the time-reversal invariant subfamily and the PT-symmetric subfamily are quite involved, in what follows we only consider the reflectionless subfamily (also known as the smooth subfamily), the scale-independent subfamily, the pure reflection subfamily (also known as the separated subfamily) and the parity invariant subfamily. As we will see below, our formalism recovers all the known results [9,10].

Reflectionless subfamily

Let us first consider the reflectionless point interaction as the simplest example, because in this case the S-matrix becomes diagonal. Since S^(1)(k) = Z(k)σ_1, the diagonal S-matrix is obtained from the off-diagonal Z(k), which is given by e_z = 0 and (α+, α−) = (0, π) or (π, 0). Since the difference between the two cases (α+, α−) = (0, π) and (π, 0) is just the overall sign of the S-matrix, without any loss of generality we can restrict ourselves to the case (α+, α−) = (0, π), using the parameterization e = (cos θ, sin θ, 0), 0 ≤ θ < 2π. With this parameterization the transmission coefficients are T± = e^{∓iθ}, so that the n-times scattering matrix becomes diagonal with entries e^{∓inθ}. The Feynman kernel (3) is then cast into the well-known form [2]. (Notice that there is no negative energy state contribution because the S-matrix has no pole.)

We should mention here the connection between the Laidlaw-DeWitt theorem [13] and our formalism. The theorem states that the path integral in a multiply-connected space has to be taken as a sum over the homotopy classes of paths, with weight factors that form a scalar unitary representation of the fundamental group of the space. In this well-known example, the weight factor is just a phase e^{−inθ}. From the viewpoint of the Laidlaw-DeWitt theorem, this weight factor is a scalar unitary representation of π_1(S^1) ≅ Z, while from our viewpoint it is just an element of S^(n), which is unitary thanks to the unitarity of the S-matrix.
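To make the reflectionless kernel tangible, here is a schematic numerical sketch of the winding-number sum: the free kernel is summed over windings n with the weight e^{−inθ} per transit, as in the well-known form cited above. For numerical convergence the sketch uses the Euclidean (heat-kernel) continuation T → −iτ and units ħ = m = 1; the truncation N and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Sketch: winding-number sum for the kernel on a circle with a reflectionless
# point interaction. Euclidean time is used purely for numerical convergence;
# hbar = m = 1 and all parameter values are illustrative.
import cmath
import math

def K0(x, tau):
    # Euclidean free kernel (heat kernel) on the line
    return math.exp(-x * x / (4.0 * tau)) / math.sqrt(4.0 * math.pi * tau)

def K_circle(x, x0, tau, L=1.0, theta=0.3, N=30):
    # weight e^{-i n theta} for a path winding n times through the point
    return sum(cmath.exp(-1j * n * theta) * K0(x - x0 + n * L, tau)
               for n in range(-N, N + 1))

print(K_circle(0.2, 0.7, 0.05))   # converges rapidly: Gaussian tails in n
```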
Scale-independent subfamily

As a next example let us consider the scale-independent point interactions, which include the above reflectionless point interaction as well as the δ′-interaction discussed in Section 1. It is known that the scale-independent point interactions are characterized by the matrices U whose two eigenvalues are (1, 1), (−1, −1), (1, −1), (−1, 1): that is, (α+, α−) = (0, 0), (π, π), (0, π), (π, 0) in (5) [9,10]. The first two cases also belong to the purely reflecting point interactions and will be considered in the next example. The latter two cases differ only by the overall sign of the S-matrix, so without any loss of generality we can restrict ourselves to the case (α+, α−) = (0, π), using the parameterization e = (cos θ, sin θ cos φ, sin θ sin φ) with 0 ≤ θ ≤ π and 0 ≤ φ < 2π. With this choice of parameters the S-matrix has the form S^(1) = (e · σ)σ_1, which is just a constant matrix, thanks to the scale-independence of the boundary conditions.

Pure reflection subfamily

Next consider the purely reflecting point interactions. These point interactions are obtained from the diagonal Z(k), which is realized by e_x = e_y = 0. With this choice the transmission coefficients identically vanish, and the reflection coefficients become R±(k) = e^{iδ∓(k)} for e_z = 1 and R±(k) = e^{iδ±(k)} for e_z = −1. In the following we consider the case e_z = 1; the result for e_z = −1 is obtained by interchanging δ+ and δ−. In the e_z = 1 case the n-times scattering matrix S^(n) takes a simple explicit form. For this pure reflection subfamily the physical meanings of δ+ and δ− are clear: δ− (δ+) is nothing but the phase shift which occurs every time a particle hits the reflecting wall at x = L (x = 0) from the left (right). This is the reason why we call these δ± the phase shifts.

Conclusions and discussions

In this paper we studied particle propagation on a circle in the presence of a single point interaction compatible with the conservation of the probability current, or equivalently the self-adjoint extension of the Laplace operator −d^2/dx^2 on S^1 \ {0}. We uncovered the classical trajectories for a quantum particle on S^1 \ {0}, which consist of 2^n distinct paths for a particle scattered n times by the point interaction (point scatterer). We also illuminated a deep connection between the scattering theory on R \ {0} and the spectral properties of S^1 \ {0}, which, roughly speaking, is summarized by the following correspondences: eigenvalues of the S-matrix on R \ {0} ⇔ energy spectrum of S^1 \ {0}; eigenvectors of the S-matrix on R \ {0} ⇔ energy eigenfunctions on S^1 \ {0}. We emphasize that the eigenvalues of the S-matrix depend only on the three parameters α+, α− and e_x, whereas the eigenvectors depend on the full parameters of U(2). Since the eigenvalues of the S-matrix on R \ {0} correspond to the energy spectrum of S^1 \ {0}, we now explain why the energy spectrum of S^1 \ {0} does not depend on e_y and e_z. We first point out that the parity operator is well-defined on S^1 \ {0}. Let us then consider a (singular) unitary transformation Û, which acts on the two-component vectors in (5). Since the unitary transformation leaves the Hamiltonian invariant, the energy spectrum remains the same, but the boundary condition (4) is changed accordingly. We thus find that the unitary transformation acts on (e_y, e_z) as a rotation by the angle 2β. This implies that the energy spectrum should depend not on (e_y, e_z) themselves but on their invariant e_y^2 + e_z^2, which is identical to 1 − e_x^2. Thus the spectrum depends only on the three parameters (α+, α−, e_x).

The main success of this work is the systematic description of the one-particle Feynman kernel on a circle with a point interaction. The point is that we do not need any knowledge of the spectrum, nor the complete set of energy eigenfunctions of the system (except for the bound states). All we have to know are the classical trajectories of a particle and the one-particle scattering matrix. We are left with a number of questions, however. Let us close with a few comments on these issues:

1. More rigorous foundation of partial amplitude unitarity. We showed that the propagation of a particle scattered n times by the point interaction should be weighted by the elements of S^(n)(k).
As a direct consequence of the unitarity of the S-matrix, these weight factors satisfy the unitarity relation for S^(n), which we propose to call partial amplitude unitarity. As briefly discussed in Section 6, in the case of the reflectionless subfamily of point interactions our partial amplitude unitarity and the Laidlaw-DeWitt theorem appear to express the same thing. However, the theorem is essentially based on homotopy theory, so it cannot be applied in general to the other subfamilies of point interactions. We believe that partial amplitude unitarity provides a wider notion than the Laidlaw-DeWitt theorem and should be derivable from some fundamental properties of the Feynman kernel, such as the unitarity K(x, T; x_0, 0) = ∫_0^L dy K(x, T; y, t) K(y, t; x_0, 0).

2. Another related issue is the algebraic structure underlying the construction of classical trajectories for a particle on S^1 \ {0}. As mentioned before, the classical trajectories for a particle interacting n times with the point interaction consist of 2^n distinct paths. These trajectories are constructed from the more fundamental trajectories depicted in Figure 2 by gluing them together under the multiplication rule of the matrix S^(1). This fact implies that, in spite of the presence of a point singularity, it might be possible to introduce some notion analogous to the fundamental group for the space S^1 \ {0}. However, we do not yet know how to treat these problems.

3. Path-integral representation for the kernels. The Feynman kernels derived in this paper have the forms that would be obtained after performing the path integration. It would be very interesting to investigate their path-integral representation. As mentioned in the Introduction, boundary conditions are treated unambiguously in the operator formalism by von Neumann's theory of self-adjoint extensions. In the path-integral formalism, however, we do not know a priori what kind of trajectories we should integrate over, nor what kind of "classical action" we should adopt as a weight. Indeed, even for a free particle in an infinitely deep well potential, the "classical action" to be adopted as a path-integral weight includes a so-called topological term that localizes at the boundaries [3]. Furthermore, this topological term is proportional to the Planck constant, so it is no longer classical. To this aim we have to consider the limit N → ∞ of the time-sliced expression, where δt := T/N. We already know the exact form of each partial amplitude ⟨x_{i+1}|e^{−iHδt}|x_i⟩. In order to evaluate the right-hand side we have to handle the N − 1 products of infinite sums over both the translationally invariant and variant classes. We would like to report on this issue elsewhere.
2009-02-05T07:24:01.000Z
2009-02-05T00:00:00.000
{ "year": 2009, "sha1": "ce95c5a18d4b7dd927514ec4c391069a61e605a7", "oa_license": null, "oa_url": "http://arxiv.org/abs/0902.0855", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ce95c5a18d4b7dd927514ec4c391069a61e605a7", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
245788885
pes2o/s2orc
v3-fos-license
Blockchain for the Future Development of the Pharmaceutical Industry

INTRODUCTION

With the progress and innovation of science and technology, the application of blockchain has gradually penetrated many fields of the economy and society. It has been applied successfully in product traceability, data circulation, supply chain management, judicial depositing, government data sharing, people's livelihood services, and other fields, and it plays an obvious role in promoting economic and social development. Blockchain is an important part of the next generation of information technology. In 2008, an academic (or team) under the pseudonym "Satoshi Nakamoto" first proposed a digital currency called Bitcoin on the Cryptography mailing list. In other words, without a central authority, people who do not trust each other can use Bitcoin to conduct transactions. Blockchain is derived from the underlying technology of Bitcoin, and blockchain technology (Blockchain 1.0), the core technology of the Bitcoin trading system, has been widely used. Blockchain can be termed a meta-technology because it results from the integration of several other technologies, such as software development, cryptographic technology, and database technology [1]. Blockchain technology is characterized by decentralization, high trust, and distributed technical solutions. At present, more and more scientific research institutions and technology companies at home and abroad participate in blockchain research and comprehensively apply blockchain technology in technological change and industrial upgrading. The initial application of blockchain technology was the original public ledger of Bitcoin, which later inspired other implementations called altchains. These kinds of networks also provide trust-based services that are not limited to currency transactions [2], and they support the development of the new generation of the information technology industry. In recent years, in the context of China's "'13th Five-Year' National Informatization Plan," "'13th Five-Year' National Population Health Informatization Development Plan," and other favorable policies, blockchain technology is becoming a focus of national information construction. As a new field of blockchain application, the medical industry has received great attention. However, there are some inaccurate concepts, evaluations, and expectations regarding the application of blockchain technology, and the potential value of the technology in the medical industry may cause some misunderstanding. These problems deserve public attention so that blockchain technology can be used correctly; this paper therefore also looks into the development trend of China's future medical industry.

In December 2019, some medical institutions in Wuhan, Hubei province, reported patients with pneumonia of unknown cause. Novel coronavirus 2019 pneumonia was classified as a Class B infectious disease in China but managed with the control measures of a Class A infectious disease [3]. With a corresponding number of confirmed cases every day, standardized diagnosis and treatment became both necessary and urgent. China's COVID-19 diagnosis and treatment program relies on mobile Internet technology and blockchain technology [4]. Through the blockchain system, early screening for COVID-19, effective and reasonable allocation of medical resources, active implementation of emergency medical rescue, and joint diagnosis and treatment by remote experts of the Internet medical alliance can all be supported.
This provides a reference for the prevention and control of the novel coronavirus, a major health event in China. Nowadays, the application and development of health and medical big data involve many fields, such as clinical diagnosis and treatment, medical scientific research, public health, chronic disease management, and medical artificial intelligence. The healthcare industry faces challenges in data security, privacy protection, data sharing, and interoperability. The distributed storage, tamper resistance, encrypted storage, and other features of blockchain technology offer technical breakthroughs for bottleneck problems such as data silos and medical data sharing. They also provide appropriate solutions to the pain points and difficulties of data applications in health and medical care, and at the same time provide basic technical support for improving the level of medical service.

In recent years, and especially during the severe period of COVID-19, the application of blockchain technology in the medical industry has developed and improved rapidly. The rational allocation of medical resources based on blockchain technology, expert consultation through the Internet medical alliance, electronic medical records, the big data travel code, and the health code have all brought great convenience to the medical industry. At the same time, further use of the technology has raised concerns about data security, privacy, data sharing, and operability. The overall application of blockchain technology in the medical industry has been investigated and studied, and the overall conclusion is that the convenience brought by this application significantly outweighs some of the hidden dangers. However, the existing research results often ignore the opinions and views of the major participants and beneficiaries of the medical industry: the consumers (or recipients of medical services) in the medical market. Therefore, our research mainly focuses on the perspective of medical service recipients, investigating and analyzing whether the recipients support the increasingly in-depth application of blockchain technology in today's medical industry.

The COVID-19 epidemic has posed a great threat to people's safety. Against this background, people have begun to pay more and more attention to the medical industry. Blockchain technology is an emerging technical field whose application can make medical services faster and more convenient than ever before, so that people are better guaranteed medical services and gain greater benefits. Therefore, it is reasonable to expect that healthcare service recipients as a whole will support blockchain technology, because they benefit from the convenience of its application, but will have concerns over issues such as transparency and privacy security. To confirm our point of view and to look for innovative suggestions on blockchain application in the medical industry, this study used a questionnaire. The questionnaire mainly involved questions about the understanding and support of the technology, concerns about privacy leakage and information transparency, and suggestions on the application of the technology in the medical field.
Through the collection and sorting of the questionnaire data, it can be concluded that the attitude of medical service recipients towards the application of blockchain technology in the medical industry is basically consistent with the hypothesis, and they put forward some suggestions in the questionnaire.

LITERATURE REVIEW

In recent years there has been an increasing amount of literature on blockchain, because blockchain belongs to the category of new infrastructure worldwide. It belongs to the information-infrastructure category of new infrastructure because blockchain integrates asymmetric encryption algorithms, consensus mechanisms, distributed storage, peer-to-peer transmission, and other related technologies. Therefore, blockchain can be applied to many fields, has broad applications, and has good development prospects [5].

Key Definition

Blockchain has many technical features, which is why it is attracting more and more attention. This paper will not list all of them; instead, it will explain those features that mainly benefit the pharmaceutical industry. The first is immutability: data on the network cannot be edited, deleted, or updated by any user. The second is decentralization, which means anything can be stored, from cryptocurrencies to important documents, contracts, or other valuable digital assets; because a blockchain does not require any central administrative privileges, users can access and store assets directly from the web. Third, blockchain provides enhanced security: cryptography adds another layer of protection, bringing more security to users' data. The blockchain industry includes upstream hardware, technology, and infrastructure; midstream blockchain application and technology services; and downstream blockchain application areas [6]. This paper conducts an in-depth study of the pharmaceutical industry to understand the pros and cons of blockchain for the pharmaceutical industry and consumer perceptions of blockchain use in the pharmaceutical industry. The pharmaceutical industry plays a vital role in every country, and patient privacy and security are central concerns within it [7].

Implication

Because of the unique nature of the pharmaceutical industry, hospitals and private clinics have access to a large amount of confidential information about patients and their current physical conditions. The privacy, security, and tamper-proof nature of blockchain can protect consumers' personal information. Blockchain differs from a typical database in the way it stores information: it stores data in blocks and links them together. As new data comes in, it is fed into a new block; once the block is filled with data, it is linked to the previous block, so that the data are chained together in chronological order. Moreover, decentralized blockchains are immutable, which means that the recorded data is irreversible. In modern settings, digital information flows from one end to the other through untrusted transmission channels, where privacy and confidentiality are major concerns. Blockchain technology provides secure peer-to-peer communication. In blockchain technology, transactions are publicly available, but they cannot be modified once recorded. A record of the consumer's physical condition is thus permanently stored and cannot be tampered with.
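The hash-linking just described can be illustrated with a minimal sketch. This is not a production blockchain (there is no consensus mechanism, network, or encryption), and the patient records are invented placeholders; it only demonstrates how storing each block's predecessor hash makes any later edit detectable.

```python
# Minimal, illustrative hash chain: each block stores the SHA-256 hash of its
# predecessor, so editing any stored record invalidates every later link.
import hashlib
import json
import time

def block_hash(block):
    # deterministic serialization, then SHA-256
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev": "0" * 64, "ts": time.time()}]

def add_block(data):
    prev = chain[-1]
    chain.append({"index": prev["index"] + 1, "data": data,
                  "prev": block_hash(prev), "ts": time.time()})

def verify(chain):
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

add_block({"patient": "anon-001", "record": "prescription A"})  # placeholder data
add_block({"patient": "anon-002", "record": "lab result B"})
print(verify(chain))            # True: chain intact
chain[1]["data"] = "edited"     # tamper with an earlier block...
print(verify(chain))            # False: the modification is detected
```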
In addition, He and Wang, in the article "Analysis of informatization application of medical management based on blockchain technology," explain that medical management informatization is an essential means to improve the health of the whole population under the special national conditions of China, with its large population and scarce medical resources. Blockchain technology addresses the interconnection and interoperability of medical information resources, information confidentiality, tamper resistance, and information sharing, providing essential technical support for the further development of Internet-based medical care in China. Consumers will have more peace of mind when using the Internet because blockchain technology will protect their privacy, and pharmaceutical companies can also use the confidentiality features of blockchain to protect their interests.

However, there are still objections, and these can be negative for consumers. In the article "Security Problems on Blockchain: The State of the Art and Future Trends," Han and others conducted intensive research on blockchain security [8]. They argued in favor of the viability of blockchain but also listed its security problems. First, regarding code vulnerabilities, they used Bitcoin to explain that blockchains can be vulnerable: some cryptographic components can be compiled with flaws and vulnerabilities that cause a trading platform to believe that the original transaction was not validated by the miners and to generate a new transaction to pay the attacker again; if the attack succeeds, the attacker gets double the bitcoins. The second problem concerns privacy protection: data-layer privacy protection technology provides basic privacy protection for users and transactions in the blockchain from the data structure perspective, but it cannot avoid the correlation between trades and user IP addresses in network transmission. This allows an attacker to listen to and track IP addresses in order to infer the relationship between transactions and public key addresses, undermining the privacy protection goal of blockchain. These same issues can also affect the use of blockchain by pharmaceutical companies and/or consumers in the event of an accident [9].

METHOD

In this paper, a questionnaire survey was used to collect and analyze data. Because the survey sample is large in both range and quantity, it helps reduce the random error that is difficult to avoid in measurement, and it makes the research results more universal, objective, and persuasive. In a questionnaire survey, the designer uses a uniformly designed questionnaire to collect information from respondents, ask for their opinions, sort and analyze the information, and draw conclusions; it is a standard tool for collecting data in social investigation and research. This paper takes the Chinese medical market as its object, investigates the current situation in the medical market and the level of understanding of blockchain technology, and analyzes the current status and problems of the application of blockchain in the medical market, as well as the views within the medical industry on the role of blockchain technology in its development.
To test the hypotheses empirically, an online questionnaire was sent via a mobile phone app called Wen Juan Xing to approximately 300 consumers, of whom 192 participated in the survey. After sample screening, data consolidation, and logical testing, and after eliminating questionnaires with logical errors or nonconformity with the research requirements, the final effective sample size was 151. The questionnaire was distributed and collected through the online mobile app, which applied convenience sampling. To establish a representative sample, this research intentionally sent it out to people of various age groups and regions.

Firstly, the reliability and validity of the measurements were analyzed. A reliability test uses repeated measurement to verify the measurement results and to assess their degree of consistency. In this paper, SPSS was used to test the reliability coefficient; specifically, the Cronbach's α coefficient method was adopted. The Cronbach's α reliability coefficient of the questionnaire was 0.833, indicating that all the above variables had good reliability and consistent stability.

Secondly, difference analysis was carried out on the measurement results. According to the analysis of variance by age, respondents of different ages perceive some loopholes in China's current medical industry system, and the current optimization of the medical industry system cannot fundamentally change the status quo. Respondents have a certain understanding of blockchain technology, and blockchain technology can significantly improve the credibility of the medical industry system. The P-value for the question of which link of the medical industry system blockchain technology helps most with credibility improvement is far greater than 0.05, indicating no significant difference among age groups on these issues.

Finally, multiple regression analysis was performed on the measurement results. The survey mainly investigated the application of blockchain technology in the medical industry in China from six aspects: privacy, security, tamper resistance, convenience, accuracy, and the release and storage of medical record data. The questionnaire used in this study was in the form of a five-point scale, with five answer options set for the subjects to make a single choice; the answer choices 1, 2, 3, 4, and 5 represent degrees from dissatisfied to satisfied. When creating the original SPSS data file, the questionnaire coding should set the corresponding variable value labels accordingly.
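As a methodological footnote, the reliability statistic reported above can be reproduced with a few lines of code. The sketch below implements the standard Cronbach's α formula, α = k/(k−1) · (1 − Σᵢ var(itemᵢ)/var(total)); the item matrix here is synthetic, generated only to demonstrate the computation, and is not the survey data.

```python
# Sketch: Cronbach's alpha for a respondents-by-items matrix of Likert scores.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array; returns Cronbach's alpha."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Synthetic 151 x 6 item matrix with a shared component (placeholder data only).
rng = np.random.default_rng(0)
shared = rng.integers(1, 6, size=(151, 1))
items = np.clip(shared + rng.integers(-1, 2, size=(151, 6)), 1, 5)
print(round(cronbach_alpha(items), 3))
```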
RESULT

Through the questionnaire survey on consumers' understanding, support, information security, privacy protection, and other aspects of blockchain technology, we could gauge healthcare service recipients' perception of the application of blockchain technology in the healthcare industry. The paper first summarizes the results of the questionnaire survey. First of all, although not everyone has a detailed understanding of blockchain technology, under the influence of the current COVID-19 epidemic respondents have at least a general understanding of the application of this technology in the medical industry. At the same time, the convenience and accuracy brought by the application of this technology in the medical industry have received fairly consistent recognition. The main point of disagreement is the privacy and security of the technology in use. Overall, more people choose to believe that the technology can ensure adequate privacy and security (especially in light of the current COVID-19 epidemic). This perception also correlates with age, with the survey showing that older groups tend to be more skeptical about safety. Therefore, the study concludes that the application of blockchain technology in the medical industry has been recognized and accepted by medical service recipients to some extent, although there are still problems of information storage security and privacy that need to be improved.

DISCUSSION

Based on the above questionnaire and analysis, this research shows that some Chinese consumers are not yet aware of blockchain, and some therefore oppose it or abstain from using it. However, due to the influence of the current COVID-19 epidemic, they have at least a general understanding of the application of this technology in the healthcare industry. Most of the consumers who are aware of blockchain and have some knowledge of it prefer to use it, and they will also support it and its application in their lives in the future. Those who oppose it or give no opinion are mostly people who do not know about blockchain; no one will immediately trust something they do not know and hand over their data and private information. Conversely, consumers who understand blockchain and benefit from it are unlikely to reject something that brings them convenience and benefits. Since most consumers do not yet understand the benefits of blockchain, such as privacy, security, tamper resistance, convenience, accuracy, and data distribution and storage, these benefits need to be explained and promoted so that consumers understand them. This study considers that both consumers and producers would benefit from this. The pharmaceutical industry is one of the many industries that will offer more protection for private information once consumers embrace blockchain: not only is the technology secure and the data not easily tampered with, it will also make it easier for consumers to seek medical care, because hospitals can quickly and accurately find their information and prescribe the correct medication when that information is on the blockchain [10]. In summary, although the application of blockchain technology in the medical industry has been recognized and accepted by the recipients of medical services, there is still room for improvement in information storage security and privacy.

CONCLUSION

Through a questionnaire survey on the application of blockchain technology in the medical industry, this study reveals the attitude of medical service recipients towards this application. The questionnaire results show that most of the respondents have a certain basic understanding of blockchain technology and maintain a supportive attitude towards this application because of the convenience brought by blockchain technology. At the same time, the security of information and the protection of privacy have also aroused widespread concern. The analysis of the questionnaire results clarifies the degree of recognition and acceptance of the application of blockchain technology in the medical industry and reveals various problems existing in the current application. The research topic of this paper is the application of blockchain technology in the medical industry, so the results of the analysis are closely related to the topic.
The analysis results of this paper can provide a reference for decision-makers of institutions or departments in the medical industry when carrying out reforms related to the application of blockchain, especially regarding the improvement of privacy protection and information security. The overall situation of the current COVID-19 epidemic has put forward higher and more urgent requirements for the development of blockchain technology in the medical industry. In the process of technological innovation, the recipients of medical services are an important part of this technology's ecosystem, and their opinions and suggestions cannot be ignored. This research focuses precisely on this point, integrating and analyzing their understanding of the technology, their level of support, and their suggestions, which can provide an important reference for the innovation of blockchain technology in the medical industry.
2022-01-07T16:07:27.013Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "fbd48077c1c696d7d5bd234fb51f4639ef4ed203", "oa_license": "CCBYNC", "oa_url": "https://www.atlantis-press.com/article/125966307.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "02d2aaa2e90074588bace36cfd9bd60ef044cfbe", "s2fieldsofstudy": [ "Medicine", "Business", "Computer Science" ], "extfieldsofstudy": [] }
18300979
pes2o/s2orc
v3-fos-license
Dietary Phytochemicals in Neuroimmunoaging: A New Therapeutic Possibility for Humans?

Although several efforts have been made in the search for genetic and epigenetic patterns linked to diseases, a comprehensive explanation of the mechanisms underlying pathological phenotypic plasticity is still far from being clarified. Oxidative stress and inflammation are two of the major triggers of the epigenetic alterations occurring in chronic pathologies, such as neurodegenerative diseases. In fact, over the last decade remarkable progress has been made in realizing that chronic, low-grade inflammation is one of the major risk factors underlying brain aging. Accumulated data strongly suggest that phytochemicals from fruits, vegetables, herbs, and spices may exert relevant immunomodulatory and/or anti-inflammatory activities in the context of brain aging. Starting from the evidence that a common denominator of aging and chronic degenerative diseases is inflammation, and that several dietary phytochemicals can potentially interfere with and regulate the normal function of cells, in particular neuronal components, the aim of this review is to summarize recent studies on neuroinflammaging processes and the evidence indicating that specific phytochemicals may act as positive modulators of neuroinflammatory events. In addition, critical pathways involved in mediating the effects of phytochemicals on neuroinflammaging are discussed, exploring the real impact of these compounds in preserving brain health before the onset of symptoms leading to inflammatory neurodegeneration and cognitive decline.

INTRODUCTION

In recent decades, increasing aging and the consequent rise in chronic degenerative diseases have led to intensified investigation of the environmental factors involved in their origin and progression. Although several efforts have been made in the search for genetic and epigenetic patterns linked to diseases, a comprehensive explanation of the mechanisms underlying pathological phenotypic plasticity is still far from being clarified (Babenko et al., 2012). Epigenetic control of gene expression has been recognized as a key player in producing rapid adaptation to changing environmental conditions, both within a single lifespan and across multiple generations. These mechanisms are particularly relevant to the brain, which is capable of changing readily in response to experience throughout a lifetime (Babenko et al., 2012). Epigenetics is the branch of biology that studies how changes in gene expression occur without modifications in the DNA sequence (Choudhuri, 2011). Such changes can be induced by environmental factors and can be highly stable, including those resulting from genetic imprinting, or dynamic, including those associated with memory (Lardenoije et al., 2015). Oxidative stress and inflammation are two of the major triggers of the epigenetic alterations occurring in chronic pathologies, such as neurodegenerative diseases, especially in the elderly population, which is already characterized by modifications of the normal homeostasis of organs and systems.
Starting from the evidence that a common denominator of aging and chronic degenerative diseases is inflammation, and that several dietary phytochemicals can potentially interfere with and regulate the normal function of cells (Scapagnini et al., 2014), in particular neuronal components, the aim of this review is to summarize recent studies on neuroinflammaging processes indicating that specific phytochemicals may act as positive modulators of neuroinflammatory events. In addition, the pathways involved in mediating the effects of phytochemicals on neuroinflammaging are discussed, exploring the real impact of dietary phytochemicals in preserving brain health before the onset of symptoms leading to inflammatory neurodegeneration and cognitive decline.

FEATURES OF NEUROIMMUNOAGING

Over the last decade, remarkable progress has been made in realizing that chronic, low-grade inflammation is one of the major risk factors underlying brain aging. During their life, cells progressively lose the ability to defend themselves from stress stimuli and, as a consequence, oxidative damage accumulates in all cell constituents (Corbi et al., 2008; Bianco et al., 2013; Lardenoije et al., 2015; Conti et al., 2015, 2016). Growing evidence suggests that the brain and immune system are intricately connected and crosstalk to maintain homeostasis. Aging is associated with aberrant inflammatory responses in human brains (Lu et al., 2004; Cribbs et al., 2012). Specifically, basal levels of proinflammatory cytokines are elevated with aging (Sierra et al., 2007), whereas anti-inflammatory mediators are reduced (Ye and Johnson, 1999). In addition, other components involved in innate immune responses, such as the complement (C) pathway, toll-like receptor (TLR) signaling, and inflammasome activation, are also upregulated as the brain ages (Cribbs et al., 2012; Cho et al., 2015). In fact, during aging the brain shows an imbalance between pro- and anti-inflammatory cytokine levels (Figure 1). It has been demonstrated both in humans and in animal models that aging is associated with decreased levels of interleukin 10 (IL10) (Ye and Johnson, 2001) and increased levels of tumor necrosis factor alpha (TNFα) and IL1β in the nervous system (Lukiw, 2004; Streit et al., 2004), as well as of IL6 in plasma (Ye and Johnson, 2001; Godbout and Johnson, 2004). In addition, increased levels of transforming growth factor β1 (TGFβ1) mRNA, a key regulatory cytokine, have been observed in the brain of aged mice and rats (Bye et al., 2001). At the same time, several changes induced by an aged microenvironment, such as increased systemic inflammation, increased permeability of the blood-brain barrier (BBB), and degeneration of neurons and other brain cells, could contribute to the production of reactive oxygen species (ROS), thus generating oxidative stress. It has been proposed that BBB permeability increases in aged animals (Blau et al., 2012; Enciu et al., 2013), perhaps facilitating infiltration by monocytes releasing mitochondria-generated ROS. According to this hypothesis, an age-related increase in the number of CD11b+ and CD45+ cells, compatible with infiltrated monocytes, has been reported in the brain of aged rats (Blau et al., 2012). Likewise, expression levels of chemotactic molecules, such as interferon-inducible protein 10 (IP-10) and monocyte chemotactic protein-1 (MCP-1), are increased in the hippocampal region (Blau et al., 2012; Von Bernhardi et al., 2015).
With normal aging, the immunophenotype of microglia is characterized by up-regulation of glial activation markers, including major histocompatibility complex II (MHC II) and CD11b, a finding reported in several species, including human post-mortem tissue, rodents, canines, and non-human primates (Tafti et al., 1996; Sheffield and Berman, 1998). This up-regulation of MHCII occurs also at the mRNA level (Frank et al., 2006). Importantly, MHCII is expressed at very low levels on the microglia of younger animals under basal conditions (Perry, 1998), providing a clear baseline against which to detect aging-related changes in the microglial immunophenotype. Increased MHCII could result from aging-induced increases in microglia number or from increases in per-microglial-cell expression. Although only a few studies are available, they support the idea of increased per-microglial-cell expression, and therefore sensitization (Barrientos et al., 2015). Despite these commonalities, the role of the immune system in aging and neurodegenerative disease remains unclear (Lucin and Wyss-Coray, 2009). Microglia are the resident immune cells of the brain, endowed with numerous receptors capable of detecting physiological disturbances. When neurons are injured as a result of aging or neurodegeneration, microglia become activated via the release of adenosine triphosphate (ATP), neurotransmitters, growth factors or cytokines, ion changes in the local environment, or the loss of inhibitory molecules displayed by healthy neurons (Hanisch and Kettenmann, 2007). The increase in expression of multiple TLRs in the aging brain (Letiembre et al., 2007; Berchtold et al., 2008) may generate a hypersensitive state of glia and neurons and thus magnify potential injury. Stimulation of TLRs induces a signaling cascade culminating in the activation of nuclear factor κB (NF-κB) and subsequent transcriptional activation of numerous proinflammatory genes, encoding cytokines, chemokines, complement proteins, enzymes [such as cyclooxygenase 2 (COX-2) and inducible nitric oxide synthase (iNOS)], adhesion molecules, and immune receptors (Nguyen et al., 2004). Exactly how neuronal TLRs promote neurodegeneration, and the identity of their ligands, is currently unclear.

[FIGURE 1 | Phytochemical effects on neuroinflammaging. Neuroimmunoinflammaging is characterized by reduced SIRT1 and Nrf2 activity with consequently increased NF-κB activation. The increased NF-κB activation, also through Toll-like receptors (TLR), in turn raises proinflammatory factors such as TNFα, IL1β, IL6, and iNOS. The disequilibrium between anti- (IL10) and pro-inflammatory molecules leads to increased inflammation, and a vicious circle is established that sustains neuroinflammaging. Phytochemicals (such as curcumin, resveratrol, sulforaphane, etc.), by increasing Nrf2 and SIRT1 activity, could inhibit NF-κB activation and thus break the vicious circle, halting the progression of brain aging.]

However, aging results in a significant increase in glial activation, complement factors, inflammatory mediators, and brain atrophy (West et al., 1994; Streit et al., 2008). Microarrays of aged human and mouse brains showed that genes related to cellular stress and inflammation increase with age, while genes related to synaptic function/transport, growth factors, and trophic support decrease (Lee et al., 2000). These changes suggest that neurons encounter increased challenges with age but receive reduced support.
Neurogenesis also decreases with age, possibly as a result of factors secreted by activated microglia (e.g., IL-6). To date, the reason for increased inflammation during aging remains unclear. However, genetic studies suggest an important role for DNA, because DNA bases are particularly vulnerable to oxidative stress damage, leading to important inflammatory alterations (Bianco et al., 2013). Also unclear is to what extent aging affects the responsiveness of microglia or their potential to contribute to neuronal loss. Despite morphological and phenotypic changes that indicate microglial activation, it has been proposed that microglia may actually become dysfunctional and enter a senescent state with age (Streit et al., 2008). Such a state may cause microglia to secrete diminished levels of neurotrophic factors and to downregulate phagocytic function. This phenomenon, associated with increased secretion of inflammatory mediators, may lead to neuronal loss and inefficient clearance of toxic protein aggregates in neurodegenerative disease (Lucin and Wyss-Coray, 2009). Finally, it should also be highlighted that mounting evidence indicates that epigenetic mechanisms play a significant role in shaping environmental influences on brain and behavior (Kosik et al., 2012).

Nrf2 AND SIRTUIN PATHWAYS INVOLVED IN NEUROIMMUNOAGING

Although the definitive mechanisms are still to be elucidated, the pro-inflammatory phenotype of senescent cells, coupled with the up-regulation of the inflammatory response with increasing age, has been found to play a role in the initiation and progression of age-related diseases such as Alzheimer's disease (Cevenini et al., 2013; Patel et al., 2015). A large body of evidence has highlighted a role of class III histone deacetylases, named sirtuins, in neurodegenerative processes (Vang et al., 2011; Baur et al., 2012). Sirtuin 1 (SIRT1), the best-characterized member of the sirtuin family, regulates immune responses via NF-κB signaling and in this way also controls ROS production (Salminen et al., 2013). NF-κB signaling is a crucial pathway of the immune defense system and an inducer of inflammatory responses (Vallabhapurapu and Karin, 2009). The NF-κB system is involved in many housekeeping and survival functions during cellular stress, e.g., by controlling apoptosis, proliferation, and energy metabolism (Karin and Lin, 2002; Perkins, 2007; Johnson and Perkins, 2012). Both SIRT1 and oxidative stress are known to be able to regulate NF-κB signaling and are crucially involved in the maintenance of cellular homeostasis (Yeung et al., 2004; Morgan and Liu, 2011). Moreover, several studies have demonstrated that NF-κB signaling is activated during aging (Helenius et al., 2001; Csiszar et al., 2008). The crosstalk between oxidative stress and inflammation is a complex process, and there are studies reporting that ROS can stimulate inflammation via the activation of inflammasomes and the production of IL-1β and IL-18 cytokines, which subsequently trigger inflammatory responses (Kitazawa et al., 2005; Heneka and O'Banion, 2007; Salminen et al., 2013). Many studies have also demonstrated that SIRT1 is a potent intracellular inhibitor of oxidative stress and inflammatory responses (Rajendran et al., 2011; Salminen et al., 2011). In particular, SIRT1 is a powerful inhibitor of NF-κB signaling and thus suppresses inflammation (Yeung et al., 2004; Salminen et al., 2008a).
Many downstream targets of SIRT1 also repress inflammatory responses, e.g., AMP-activated protein kinase (AMPK) (Salminen et al., 2011) and Forkhead box O (FoxO) factors (Lin et al., 2004), by inhibiting NF-κB signaling. Yeung et al. (2004) revealed that SIRT1 performs its anti-inflammatory activity by deacetylating the Lys310 residue of the v-rel avian reticuloendotheliosis viral oncogene homolog A/p65 (RelA/p65) component, thus inhibiting the transactivation capacity of the NF-κB complex. Recently, Cho et al. (2015) showed that SIRT1 levels in microglia exhibit an age-dependent decline, and that microglial SIRT1 deficiency leads to cognitive decline in normal aging. The authors suggested that aging-induced SIRT1 deficiency in microglia could initiate epigenetic alterations of IL-1β, leading to its enhanced expression, which is associated with impairments in memory and related cognitive decline. Moreover, there is a growing body of evidence suggesting that activation of SIRT1 and other sirtuins can protect neurons in experimental models of neurodegenerative disorders (Duan, 2013). In particular, Min et al. (2010) demonstrated that the tau protein, which stabilizes microtubules, is acetylated, and that tau acetylation prevents degradation of phosphorylated tau (p-tau). Hyperphosphorylation of the tau protein can result in the self-assembly of tangles of paired helical filaments and straight filaments, which are involved in the pathogenesis of Alzheimer's disease and other tauopathies (Alonso et al., 2001). Deleting SIRT1 enhanced the levels of acetylated tau and pathogenic forms of p-tau, probably by blocking proteasome-mediated degradation. These results indicate that SIRT1 can prevent the formation of neurofibrillary tangles (Min et al., 2010; Lee et al., 2014). In primary cortical cultures, overexpression of SIRT1 in microglia protected against amyloid beta toxicity, most likely by inhibiting NF-κB signaling (Chen et al., 2005). SIRT1 could also protect against cellular senescence by inactivating NF-κB (Rovillain et al., 2011; Tilstra et al., 2012) or by deacetylating the FOXO3 transcription factor (Yao et al., 2012). In addition, SIRT1 could enhance T helper 2 (Th2) lymphocyte responses in dendritic cells (Legutko et al., 2011). In the Wallerian degeneration slow (Wlds) mouse model, SIRT1 activation protects axons against neuronal injury (Dali-Youcef et al., 2007). Decreasing SIRT1 activity reduces the axonal protection originally observed, whereas SIRT1 activation by resveratrol decreases axonal degeneration after neuronal injury (Suzuki and Koike, 2007). This suggests that the neuroprotection in the Wlds mouse model is achieved by increasing the neuronal nicotinamide adenine dinucleotide (NAD+) reserve and/or SIRT1 activity (Dali-Youcef et al., 2007). Likewise, inhibition of sirtuin 2 (SIRT2) rescued α-synuclein toxicity and modified inclusion morphology in a cellular model of Parkinson's disease, and genetic inhibition of SIRT2 via small interfering RNA similarly rescued α-synuclein toxicity (Outeiro et al., 2007). Furthermore, the inhibitors protected against dopaminergic cell death both in vitro and in a Drosophila model of Parkinson's disease, suggesting another link between neurodegeneration and aging (Outeiro et al., 2007). In addition, SIRT1 activation significantly decreases neuronal cell death induced by amyloid-β peptides through inhibition of NF-κB signaling (Dali-Youcef et al., 2007). In particular, SIRT1 deacetylates retinoic acid receptor beta (RARβ) and activates transcription of a disintegrin and metalloprotease domain 10 (ADAM10), leading to upregulated amyloid precursor protein (APP) processing by α-secretase and resulting in reduced production of amyloid beta (Aβ) peptide (Donmez et al., 2010). In light of this evidence, SIRT1, as well as the other sirtuins, is now considered a promising therapeutic option for neurological syndromes such as Alzheimer's, Parkinson's, and Huntington's diseases (Donmez et al., 2010; Jeong et al., 2012), and in general for the control of the progression of neuroimmunoaging.

Another key molecule involved in neuroimmunoaging is nuclear factor (erythroid-derived 2)-like 2 (Nrf2). Emerging evidence suggests that Nrf2 may play an important role in the regulation of brain inflammation, and some studies have suggested that Nrf2 has an antagonistic effect on the NF-κB pathway, which is considered a hallmark of inflammation (Liu et al., 2008; Djordjevic et al., 2015). Nrf2 is a member of the Cap'n'Collar family of transcription factors that bind to nuclear factor erythroid derived 2 (NF-E2) binding sites (GCTGAGTCA), which are essential for the regulation of erythroid-specific genes. Nrf2 is expressed in a wide range of tissues, many of which are sites of expression of phase 2 detoxification genes (Dinkova-Kostova et al., 2002), and it is targeted for ubiquitination and proteasomal degradation via binding to a cytosolic repressor protein, Kelch-like ECH-associated protein 1 (Keap1) (McMahon et al., 2006). The principle of the Nrf2 system is to keep Nrf2 protein levels low under normal conditions, with the possibility of rapid induction in case of a sudden increase in the oxidation status of the cell. This is achieved by constitutive synthesis and degradation of Nrf2, with the possibility of rapid redirection of Nrf2 to the nucleus (Sandberg et al., 2014). There is now an overwhelming amount of experimental evidence that Nrf2 serves as a master regulator of the antioxidants involved in cellular defenses against various electrophiles and oxidants (Kobayashi and Yamamoto, 2006; Calabrese et al., 2008). Indeed, new findings connect Nrf2 also to the expression of other types of protective proteins, such as brain-derived neurotrophic factor (BDNF) (Sakata et al., 2012), the anti-apoptotic B-cell lymphoma 2 (BCL-2) (Niture and Jaiswal, 2012), the anti-inflammatory interleukin (IL)-10, and the mitochondrial transcription (co)factors NRF-1 and peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC-1α) (Piantadosi et al., 2011). The relation between Nrf2 and NF-κB is not well characterized, but the identification of NF-κB binding sites in the promoter region of the Nrf2 gene suggests crosstalk between these two regulators of inflammatory processes (Nair et al., 2008). The NF-κB subunit p65 has been shown to function as a negative regulator of Nrf2 activation, either by depriving Nrf2 of cAMP response element-binding protein (CBP) or by recruitment of histone deacetylase 3 (HDAC3), causing local histone hypoacetylation and down-regulation of Nrf2-antioxidant responsive element (Nrf2-ARE) signaling (Liu et al., 2008). Yu et al. (2011) showed that p65 decreased Nrf2 binding to its cognate DNA sequences and enhanced Nrf2 ubiquitination, thus providing direct evidence that the interaction of nuclear factor p65 with Keap1 is critical for NF-κB repression of the Nrf2-ARE pathway.
In particular, SIRT1 deacetylates retinoic acid receptor beta (RARβ) and activates a disintegrin and metalloprotease domain (ADAM) 10 transcription, leading to upregulated amyloid precursor protein (APP) processing by α-secretase and resulting in reduced production of amyloid beta (Aβ) peptide (Donmez et al., 2010). Based on this evidence, SIRT1, as well as the other sirtuins, is now considered a promising therapeutic option for neurological disorders such as Alzheimer's, Parkinson's, and Huntington's diseases (Donmez et al., 2010; Jeong et al., 2012), and in general for the control of the progression of neuroimmunoaging. Another key molecule involved in neuroimmunoaging is nuclear factor (erythroid-derived 2)-like 2 (Nrf2). Emerging evidence suggests that Nrf2 may play an important role in the regulation of brain inflammation, and some studies have suggested that Nrf2 has an antagonistic effect on the NF-κB pathway, which is considered a hallmark of inflammation (Liu et al., 2008; Djordjevic et al., 2015). Nrf2 is a member of the Cap'n'Collar family of transcription factors that bind to nuclear factor erythroid derived 2 (NF-E2) binding sites (GCTGAGTCA), which are essential for the regulation of erythroid-specific genes. Nrf2 is expressed in a wide range of tissues, many of which are sites of expression for phase 2 detoxification genes (Dinkova-Kostova et al., 2002), and is targeted for ubiquitination and proteasomal degradation via binding to a cytosolic repressor protein, Kelch-like ECH associated protein 1 (Keap1) (McMahon et al., 2006). The principle of the Nrf2 system is to keep Nrf2 protein levels low under normal conditions, with the possibility of rapid induction in case of a sudden increase in the oxidation status of the cell. This is achieved by constitutive synthesis and degradation of Nrf2, with the possibility of rapid redirection of Nrf2 to the nucleus (Sandberg et al., 2014). There is now an overwhelming amount of experimental evidence that Nrf2 serves as a master regulator of the antioxidants involved in cellular defenses against various electrophiles and oxidants (Kobayashi and Yamamoto, 2006; Calabrese et al., 2008). Indeed, new findings also connect Nrf2 to the expression of other types of protective proteins, such as brain-derived neurotrophic factor (BDNF) (Sakata et al., 2012), the anti-apoptotic B-cell lymphoma 2 (BCL-2) (Niture and Jaiswal, 2012), the anti-inflammatory interleukin (IL)-10, and the mitochondrial transcription (co-)factors NRF-1 and peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC-1α) (Piantadosi et al., 2011). The relation between Nrf2 and NF-κB is not well characterized, but the identification of NF-κB binding sites in the promoter region of the Nrf2 gene suggests crosstalk between these two regulators of inflammatory processes (Nair et al., 2008). The NF-κB subunit p65 has been shown to function as a negative regulator of Nrf2 activation, either by depriving Nrf2 of cAMP response element-binding protein (CBP) or by recruitment of histone deacetylase 3 (HDAC3), causing local histone hypoacetylation and down-regulation of Nrf2-antioxidant responsive element (Nrf2-ARE) signaling (Liu et al., 2008). Yu et al. (2011) showed that p65 decreased Nrf2 binding to its cognate DNA sequences and enhanced Nrf2 ubiquitination, providing direct evidence that the interaction of nuclear factor p65 with Keap1 is critical for NF-κB repression of the Nrf2-ARE pathway.
Moreover, the N-terminal region of p65 was necessary both for the interaction with Keap1 and for its transcriptional suppression activity, and nuclear translocation of Keap1 was augmented by p65. The authors concluded that, taken together, these findings suggest that NF-κB signaling inhibits the Nrf2-ARE pathway through the interaction of p65 and Keap1 (Yu et al., 2011). Further, activation of Nrf2 in response to lipopolysaccharide (LPS) has been suggested to depend on the key innate immunity-regulating adaptor protein myeloid differentiation primary response gene 88 (MyD88) (Kim et al., 2011). In particular, Kim et al. (2011) demonstrated that treatment of macrophages with LPS activates Nrf2. Interestingly, the authors found that Nrf2 is activated in a MyD88-dependent fashion without the involvement of ROS. These results suggest the possibility that Nrf2 activation by inflammatory stimuli can be a mechanism that contributes to dampening excessive inflammatory responses (Kim et al., 2011). A study demonstrated that Nrf2 knockout mice were hypersensitive to the neuroinflammation induced by LPS, as indicated by an increase in microglial cells and in the inflammation markers iNOS, IL-6, and TNF-α in their hippocampi compared with those of wild-type littermates (Innamorato et al., 2008). Activation of Nrf2 could also be achieved via the increase in peripheral IGF-1 that enters the brain after exercise (Cotman et al., 2007; Sandberg et al., 2014). Recently, Rojo et al. (2010) showed that Nrf2-deficient mice exhibited more astrogliosis and microgliosis. Inflammation markers characteristic of classical microglial activation (COX-2, iNOS, IL-6, and TNF-α) were also increased and, at the same time, anti-inflammatory markers attributable to alternative microglial activation, such as FIZZ-1 and IL-4, were decreased. These results were confirmed in microglial cultures, further demonstrating a role of Nrf2 in tuning the balance between classical and alternative microglial activation (Rojo et al., 2010). Aging drives a long-lasting sub-ventricular zone impairment, at least in part via reduced Nrf2-mediated tolerance to inflammation and oxidative stress associated with a dysfunctional astrocyte-microglial dialogue, in turn interrupting key molecular signaling mechanisms that finely regulate sub-ventricular cell homeostasis (L'Episcopo et al., 2013). In particular, when "primed" microglia of aged mice become hyperactivated upon a second hit, the generation of highly toxic mediators, in the face of an impaired antioxidant self-protective neuroprogenitor cell response, dramatically inhibits neurogenesis, suggesting that glial age is of critical importance in determining promotion vs. inhibition of neurogenesis. Interestingly, with age, the exaggerated microglial activation can impair an astrocyte's ability to express critical antioxidant, anti-inflammatory, and neurogenic factors, thereby resulting in an overall reduction of glial proneurogenic capacities (L'Episcopo et al., 2013). These processes may disrupt the crosstalk between two pivotal pathways in the sub-ventricular zone, the Nrf2/phosphoinositide 3-kinase/protein kinase B (Nrf2/PI3-K/Akt) and the Drosophila melanogaster wingless gene/Frizzled receptor/β-catenin (Wnt/Fzd/β-catenin) signaling cascades, which are involved in cell survival, proliferation, and/or differentiation. The manipulation of these age-related Nrf2 pathways at middle age is associated with significant dopaminergic neuroprotection (L'Episcopo et al., 2013).
A study also demonstrated that direct intrahippocampal gene delivery of Nrf2, via a lentiviral vector, results in a reduction of spatial learning deficits in aged mice (Kanninen et al., 2009). In particular, memory improvement in the mice after Nrf2 transduction shifted the balance between soluble and insoluble Aβ toward an insoluble Aβ pool without a concomitant change in total brain Aβ burden. Nrf2 gene transfer was associated with a reduction in astrocytic, but not microglial, activation and with induction of the Nrf2 target gene heme oxygenase 1 (HO-1), indicating overall activation of the Nrf2-ARE pathway in hippocampal neurons 6 months after injection (Kanninen et al., 2009). Based on this body of emerging evidence, it seems that in many cases the beneficial effects of low doses of phytochemicals rely on their ability to activate the Nrf2/ARE and sirtuin pathways.

MECHANISMS OF DIETARY PHYTOCHEMICALS IN NEUROIMMUNOAGING

Dietary phytochemicals include a large group of non-nutrient compounds from a wide range of plant-derived foods and chemical classes. Several plant-based extracts and chemicals are thought to have beneficial effects on human brain function. The potential effect of these molecules is linked to the common ancestry of plants and animals, which has provided conserved cellular processes, including similarities in most pathways for the synthesis and breakdown of proteins, nucleic acids, carbohydrates, and lipids (Kennedy and Wightman, 2011). In fact, some molecules that function as neurochemicals within the mammalian central nervous system (CNS) are ubiquitous across all eukaryotes (Kawashima et al., 2007). At a molecular level, signaling molecules and pathways are preserved in both plants and animals (Kushiro et al., 2003). For instance, multiple aspects of cellular and redox signaling are conserved (Dalle-Donne et al., 2009), including similar gene expression in response to cellular stressors, regulated by common transcription factors (Scandalios, 2005). The basis for the use of polyphenol-rich nutritional supplements as modulators of age-related cognitive decline is the age-related increase in oxidative stress (Morris et al., 2006; Craft et al., 2012) and low-grade inflammation. The beneficial effects of phytochemicals are often supposed to be due to their intrinsic antioxidant and anti-inflammatory properties (Murugaiyah and Mattson, 2015). At low doses, phytochemicals have beneficial or stimulatory effects on animal cells, whereas in high amounts they can be toxic. This is an example of "hormesis" (Mattson, 2008; Lee et al., 2014). Hormetic phytochemicals such as resveratrol, sulforaphane, curcumin, catechins, allicin, and hypericin are reported to activate adaptive stress response signaling pathways that increase cellular resistance to injury and disease (Mattson and Cheng, 2006). Neuroactive phytochemicals present in commonly consumed fruits, vegetables, and nuts are also generally well tolerated (Wöll et al., 2013; Lee et al., 2014). These phytochemicals are hormetic substances because they can be toxic in high amounts, but are beneficial in the lower amounts usually consumed (Mattson, 2015).
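To make the hormetic (low dose beneficial, high dose toxic) behavior just described concrete, the toy model below combines a saturating benefit term with a toxicity term that dominates at high doses, producing the inverted-U shape typical of hormesis. This is purely an illustrative sketch; the functional forms and all parameter values are assumptions chosen for demonstration, not a model taken from the cited literature.

```python
import numpy as np

def hormetic_response(dose, benefit=1.0, k_benefit=0.5, k_toxicity=0.05):
    """Toy biphasic dose-response: saturating benefit minus a toxicity
    term that grows with dose (all parameters are illustrative)."""
    stimulation = benefit * dose / (k_benefit + dose)  # saturating low-dose benefit
    toxicity = k_toxicity * dose ** 2                  # dominates at high doses
    return stimulation - toxicity

doses = np.linspace(0.0, 6.0, 61)
responses = hormetic_response(doses)
optimum = doses[np.argmax(responses)]
print(f"peak net benefit at dose ~{optimum:.1f} (arbitrary units)")
# Low doses give a net positive response; beyond the peak the response
# declines and eventually turns negative (net toxicity).
```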
In this context, although controversial data are available on the capacity of these compounds to cross the BBB, and bioavailability continues to be highlighted as a major concern, the hormetic dose-response model has important biological and clinical implications, including the activation of neuroprotective stress response pathways at low concentrations (Schaffer and Halliwell, 2012; Davinelli et al., 2016). Their effects are represented by a biphasic dose-response curve, with the first phase being a positive/beneficial effect and the second phase a progressively negative/toxic effect (Calabrese et al., 2007; Mattson et al., 2007). Recent findings suggest that adaptive cellular stress responses to phytochemicals are mediated via some of the same pathways that mediate responses to energy restriction and exercise (Mattson, 2012; Milisav et al., 2012). Commonly consumed phytochemicals are able to induce mild stress in neural cells, enhancing the ability of the nervous system to cope with stress and thus promoting optimal function and longevity of the nervous system. As with exercise and energy restriction, intake of neurohormetic phytochemicals typically occurs on an intermittent basis, which provides a "recovery period" that allows cells to repair and grow (Mattson, 2015). Examples include pathways that signal via Nrf2, SIRT1, and AMPK (Menendez et al., 2013; Misra et al., 2013). Activation of one or more of these signaling pathways, which evolved to defend cells against potentially toxic phytochemicals, appears to be a major reason why ingestion of these substances can protect neurons against injury and disease (Calabrese et al., 2008; Murugaiyah and Mattson, 2015).

SOME PHYTOCHEMICALS POTENTIALLY USEFUL IN NEUROPROTECTION

Several studies indicate that antioxidants, e.g., dietary polyphenols, can inhibit inflammation; in particular, the terpenoids are able to inhibit NF-κB signaling and thus repress inflammation (Rahman et al., 2006; Salminen et al., 2008b). In this context, there are many mutual interactions, and a delicate balance exists between SIRT1 and ROS signaling, which provokes context-dependent responses in autophagic flux and inflammation (Salminen et al., 2013). Recent findings suggest that several phytochemicals exhibit biphasic dose responses in cells, with low doses activating signaling pathways that result in increased expression of genes encoding survival proteins, as in the case of the Keap1/Nrf2/ARE pathway activated by curcumin and the NAD/NADH-sirtuin-1 pathway activated by resveratrol. Underlining the role of dietary components in modifying cellular mechanisms, Morrison et al. (2010) recently showed that in 20-month-old male mice fed either a 'Western diet' (41% fat), a very high-fat lard diet (60% fat), or corresponding control diets for 16 weeks, only the high-fat lard diet increased age-related oxidative damage and impaired retention in the behavioral test. This selective increase in oxidative damage and cognitive decline was also associated with a decline in Nrf2 levels and activity, suggesting a potential role for a decreased antioxidant response. The authors therefore suggested impaired Nrf2 signaling and increased cerebral oxidative stress as mechanisms underlying high-fat-diet-induced declines in cognitive performance in the aged brain (Morrison et al., 2010).

Ferulic Acid

Ferulic acid (FA) is commonly found in plant foods such as tomatoes, sweet corn, and rice (Srinivasan et al., 2007).
It has been reported that this compound decreases the levels of inflammatory mediators (prostaglandin E2 and TNF-α) (Ou et al., 2003) and iNOS expression and function (Tetsuka et al., 1996). In vivo, long-term administration of FA at a dose of 300 µM effectively protects against Aβ toxicity by inhibiting microglial activation (Kim et al., 2004). Kanski et al. (2002) reported that FA protects against free radical-mediated changes in the conformation of synaptosomal membrane proteins. Moreover, Sultana et al. (2005) showed that FA, also at lower doses of 10-50 µM, significantly protects against Aβ toxicity in hippocampal cultures by modulating oxidative stress directly and by inducing protective genes, also exerting neuroprotective effects through up-regulation of protective enzymes such as heme oxygenase-1 (HO-1) and heat shock protein 70 (Hsp70) (Scapagnini et al., 2004; Srinivasan et al., 2007), suggesting that Nrf2 and sirtuins control the effects of FA. More recently, Mori et al. (2013) demonstrated that oral administration of FA for 6 months improved behavioral impairment, mitigated cerebral amyloidosis, and inhibited APP metabolism by reducing β-site APP cleaving enzyme 1 (BACE1) expression and β-secretase activity in an accelerated mouse model of cerebral amyloidosis. Supporting results from cultured mutant human APP-overexpressing murine neuron-like cells revealed a dose-dependent reduction of various Aβ species by FA and inhibition of β-secretase cleavage. FA also ameliorated neuroinflammation, including β-amyloid plaque-associated gliosis and expression of the proinflammatory cytokines TNF-α and IL-1β. Lastly, mRNA expression of three oxidative stress markers [superoxide dismutase 1 (SOD1), catalase (CAT), and GSH-Px 1] was decreased in FA-treated mice, providing support for long-term FA dietary supplementation as a therapeutic strategy (Mori et al., 2013).

Green Tea

The consumption of green tea has recently attracted much attention in Western culture because of its beneficial effects, such as protection of dopaminergic neurons from damage induced by 6-hydroxydopamine in a rat model of Parkinson's disease (Guo et al., 2007); reduction of mutant huntingtin misfolding and neurotoxicity in Huntington's disease models (Ehrnhoefer et al., 2006); direct protection of neurons against Aβ toxicity (Bastianetto et al., 2000); and protection against Aβ-induced cognitive impairment in a rat model relevant to Alzheimer's disease (Haque et al., 2008). Moreover, a study by Wu et al. (2006) reported that one of its components, epigallocatechin gallate (EGCG), up-regulates HO-1 expression via activation of the Nrf2-ARE pathway in endothelial cells, conferring resistance against hydrogen peroxide (H2O2)-induced cell death and suggesting a hormetic mechanism of action. It has been demonstrated that EGCG selectively protects cultured rat cerebellar granule neurons from oxidative stress (Schroeder et al., 2009). More recently, Obregon et al. confirmed that oral administration of EGCG promotes cleavage of APP into α-CTF and soluble APP-α (Obregon et al., 2006). These cleavage events are associated with elevated α-secretase cleavage activity and are also positively correlated with activation of ADAM10, a key candidate α-secretase (Rezai-Zadeh et al., 2008), suggesting a role of sirtuins in mediating the effects of EGCG (Donmez et al., 2010; Jeong et al., 2012).
Blueberry and Strawberry

Evidence shows that plant extracts from mulberry, strawberry, and blueberry contain antioxidants that are able to induce the antioxidant defense system and improve memory deterioration in aging animals (Shih et al., 2010). Supplementation of the diet of 19-month-old rats with strawberry, blueberry, or spinach extracts for 8 weeks resulted in the reversal of age-related deficits in several neuronal and behavioral parameters (Joseph et al., 1999). Blueberry supplementation prevented learning and memory deficits in a mouse model of Alzheimer's disease (Joseph et al., 2003). In addition, dietary supplementation with blueberry extract increased the survival of dopamine-producing neurons in a model relevant to Parkinson's disease therapy (McGuire et al., 2006). Moreover, blueberries and strawberries counter the deleterious effects of irradiation by reducing oxidative stress and inflammation, thereby improving neuronal signaling and preventing the accumulation of disease-related proteins such as tau in the hippocampus of irradiated rats (Poulose et al., 2014). Andres-Lacueva et al. (2005) examined whether different classes of polyphenols could be found in brain areas associated with cognitive performance following blueberry (BB) supplementation. To this end, 19-month-old F344 rats were fed a control or 2% BB diet for 8-10 weeks and tested in the Morris Water Maze (MWM), a measure of spatial learning and memory. Several anthocyanins were found in the cerebellum, cortex, hippocampus, or striatum of the BB-supplemented rats, but not the controls. Correlational analyses revealed a relationship between MWM performance in BB rats and the total number of anthocyanin compounds found in the cortex, suggesting that these compounds may deliver their antioxidant and signaling-modifying capabilities centrally. To support and clarify the antioxidant effects of this compound, Çoban et al. (2015) recently investigated the consequences of whole fresh BB treatment at different percentages on oxidative stress in an age-related brain damage model. The study showed that BB treatments, especially BB at the higher percentage, reduced malondialdehyde and protein carbonyl levels and acetylcholinesterase activity, and elevated glutathione (GSH) levels and GSH-Px activity, diminishing apoptosis and ameliorating histopathological findings in the brains of rats treated with D-galactose (GAL). The authors concluded that BB partially prevented the shift toward an imbalanced pro-oxidative status and apoptosis, together with histopathological amelioration, by acting as an antioxidant (radical scavenger) itself in GAL-treated rats (Çoban et al., 2015). It has been reported that long-term treatment with blueberry also has a neuroprotective effect in attenuating cerebral ischemia/reperfusion (I/R) injury. Zhou et al. (2015) showed that 24 h after I/R, pterostilbene (a major component of blueberry) dose-dependently improved neurological function, reduced brain infarct volume, and alleviated brain oedema. The most effective dose was 10 mg/kg; the therapeutic time window was within 1 h after I/R, and treatment immediately after reperfusion showed the best protective effect. The protective effect was further confirmed by the finding that post-ischemic treatment with pterostilbene (10 mg/kg) significantly improved motor function, alleviated BBB disruption, increased neuronal survival, and reduced cell apoptosis in the cortical penumbra after cerebral I/R.
The authors also found that pterostilbene (10 mg/kg) significantly reversed the increased content of malondialdehyde and the decreased activity of superoxide dismutase in the ipsilateral hemisphere, with a decrease in cells positive for the oxidative stress markers 4-hydroxynonenal and 8-hydroxyguanosine in the cortical penumbra. Thus, pterostilbene dose- and time-dependently exerts a neuroprotective effect against acute cerebral I/R injury (Zhou et al., 2015), and its antioxidant action is mediated by the increased expression of Nrf2 (Saw et al., 2014).

Curcumin

Curcumin, the principal curcuminoid and the most active component in turmeric, is a biologically active phytochemical. Several beneficial effects of curcumin on the nervous system have been reported. In an animal model of stroke, curcumin treatment protected neurons against ischemic cell death and ameliorated behavioral deficits (Wang et al., 2005). A hormetic mechanism of action of curcumin is suggested by studies showing that the expression levels of the stress response protein HO-1 were increased in cultured hippocampal neurons treated with curcumin (Scapagnini et al., 2006). Moreover, curcumin has been shown to reverse chronic stress-induced impairment of hippocampal neurogenesis and to increase expression of BDNF in an animal model of depression (Xu et al., 2007). At non-toxic concentrations, curcumin induces HO-1 expression by activating the Nrf2/ARE pathway both in vitro (Pae et al., 2007) and in vivo (Farombi et al., 2008). Several studies have also shown that curcumin interacts with NF-κB and through this interaction exerts a protective function, including in the regulation of T-cell-mediated immunity (Kou et al., 2013). Recently, González-Reyes et al. (2013) identified curcumin as a neuroprotective agent against hemin-induced damage in primary cultures of rat cerebellar granule neurons. Hemin, the oxidized form of heme, is a highly reactive compound that induces cellular injury. Pre-treatment of the neurons with 5-30 µM curcumin increased HO-1 expression 2.3-4.9-fold and GSH levels 5.6-14.3-fold. Moreover, 15 µM curcumin attenuated the increase in ROS production by 55%, the reduction of the GSH/glutathione disulfide ratio by 94%, and the cell death induced by hemin by 49%. Furthermore, curcumin was found to induce Nrf2 translocation into the nucleus, suggesting that pre-treatment with curcumin induces Nrf2 and an antioxidant response that may play an important role in the protective effect of this antioxidant against hemin-induced neuronal death (González-Reyes et al., 2013). In rodent and human cells, curcumin-induced HO-1 overexpression was correlated with production of mitochondrial ROS, activation of the transcription factors Nrf2 and NF-κB, induction of mitogen-activated protein kinase (MAPK) p38, and inhibition of phosphatase activity (Andreadi et al., 2006; McNally et al., 2007). Moreover, curcumin is an activator of Nrf2 (Moi et al., 1994), acting by modifying specific highly reactive cysteine residues of Keap1 (Dinkova-Kostova et al., 2002, 2005), with the consequent loss of Keap1's ability to target Nrf2 for degradation; Nrf2 then undergoes nuclear translocation. Using an Alzheimer transgenic mouse model (Tg2576), Lim et al. (2001) showed that dietary curcumin inhibited aggregation and disaggregated fibrillar amyloid beta (Aβ) in vitro.
In vivo studies showed that curcumin injected peripherally into aged mice crossed the BBB and directly bound small β-amyloid species, blocking aggregation and fibril formation in vitro and in vivo. These data suggest that low-dose curcumin effectively disaggregates Aβ as well as preventing fibril and oligomer formation, supporting the rationale for curcumin use in clinical trials (Lim et al., 2001). More recently, Garcia-Alloza et al. (2007) demonstrated in transgenic APPswe/PS1dE9 mice that curcumin, given intravenously for 7 days, crosses the BBB, binds to β-amyloid deposits in the brain, and accelerates their rate of clearance (Garcia-Alloza et al., 2007). Curcumin has also been demonstrated to exert a neuroprotective effect in rats subjected to ischemia/reperfusion injury, and this effect has been related to the direct scavenging activity of curcumin as well as to curcumin-induced interference with the apoptotic machinery and increases in antioxidant molecules (GSH) and enzymes such as CAT and SOD (Al-Omar et al., 2006; Calabrese et al., 2008).

Sulforaphane

Sulforaphane (SFN), a phytochemical present in high amounts in cruciferous vegetables such as broccoli, is known to activate the Nrf2-ARE stress response pathway in rodent brains and microvasculature, thereby reducing brain damage in a traumatic brain injury model. Sulforaphane has been reported to protect cultured neurons against oxidative stress (Kraft et al., 2004) and dopaminergic neurons against mitochondrial toxins (Han et al., 2007; Son et al., 2008). Administration of this compound initiated at 1 h post-cortical impact injury has been shown to improve cognitive function, in particular spatial learning and memory, and to reduce working memory dysfunction (Dash et al., 2009). In a model of neonatal hypoxia-ischemia, pretreatment with SFN increased the expression of Nrf2 and HO-1 in the mouse brain and reduced the infarct ratio (Ping et al., 2010). Numerous other non-nutrient compounds contained in food and plants have been added to the list of Nrf2 activators, among them several food-derived antioxidant polyphenols. One of the most important aspects of current polyphenol research is the focus on the neuroprotective capacity of these molecules, which seems to be due mostly to their ability to activate different defensive molecular pathways rather than merely to their intrinsic antioxidant properties (Scapagnini et al., 2011). In this regard, the critical role of Nrf2/HO-1 activation by some of these neuroprotective compounds has recently been demonstrated, providing insight into the possible therapeutic significance of a closely related group of polyphenols against neurodegenerative disorders and cognitive decline (Scapagnini et al., 2011).

Resveratrol

Neuroprotective effects of resveratrol have been reported in several different studies, in particular against beta-amyloid-induced oxidative cell death (Jang and Surh, 2003) and against several different insults to dopaminergic neurons in midbrain slice cultures (Okawara et al., 2007). In particular, in cultured rat pheochromocytoma (PC12) cells, resveratrol attenuated Aβ-induced cytotoxicity, apoptotic features, and intracellular ROS accumulation. Moreover, the transient Aβ-induced activation of NF-κB was suppressed by resveratrol pretreatment (Jang and Surh, 2003), suggesting a key role of the NF-κB inflammatory pathway in Aβ deposition and a possible therapeutic function of resveratrol in mediating neuroprotection.
Resveratrol protects cortical neurons from oxidative stress-induced injury (Zhuang et al., 2003), and suppresses alcohol-induced cognitive deficits and neuronal apoptosis (Tiwari and Chopra, 2013). In addition, resveratrol has been found to reduce the production of IL-1β and TNF-α induced by LPS or Aβ in microglia (Capiralla et al., 2012; Zhong et al., 2012). Further studies have confirmed the powerful neuroprotective effect of resveratrol in neurodegenerative disorders, such as Parkinson's disease and Alzheimer's disease (Albani et al., 2009), and in traumatic brain injury (Ates et al., 2007; Zhang et al., 2015). Prozorovski et al. (2008) found that the treatment of neural progenitor cells (NPCs) with resveratrol mimicked oxidizing conditions and increased differentiation of NPCs toward astrocytes through a mechanism that requires Sirt1 (Prozorovski et al., 2008). Indeed, subtle alterations of the redox state, found in different brain pathologies, regulate the fate of mouse NPCs through SIRT1. Mild oxidation or direct activation of SIRT1 suppressed proliferation of NPCs and directed their differentiation toward the astroglial lineage at the expense of the neuronal lineage, whereas reducing conditions had the opposite effect. Under oxidative conditions in vitro and in vivo, Sirt1 was upregulated in NPCs, bound to the transcription factor Hes1, and subsequently inhibited pro-neuronal Mash1. In response to brain injury, NPCs differentiate preferentially into astrocytes rather than neurons. Excessive astrocyte expansion, known as astrogliosis, can prevent growth of neurons and interfere with proper damage repair. Therefore, the ability to direct differentiation of NPCs may be useful in protecting the brain against inflammatory diseases, such as multiple sclerosis, which involve astrogliosis (Prozorovski et al., 2008). Resveratrol was shown to affect the activity of SIRT1 in vitro depending on the nature of the substrate for deacetylation (Baur and Sinclair, 2006). It has been reported that the SIRT1 agonist resveratrol protects C. elegans neurons expressing a fragment of the Huntington disease-associated protein huntingtin, and protects mammalian neurons from mutant polyglutamine cytotoxicity in an HdhQ111 knock-in mouse model of Huntington disease (Dali-Youcef et al., 2007). Moreover, resveratrol had no effect on the binding of NF-κB proteins to DNA, but it blocked the TNF-induced translocation of the p65 subunit of NF-κB and reporter gene transcription. Similarly, the activation of c-Jun N-terminal kinases (JNK) and their upstream MAPKs is inhibited by resveratrol, which may explain the mechanism of suppression of AP-1 by resveratrol (Rahman et al., 2006). Recently, Zhang et al. (2015) investigated the potential role of resveratrol in attenuating hypoxia-induced neurotoxicity via its anti-inflammatory actions, using in vitro models of the BV-2 microglial cell line and primary microglia. The authors found that resveratrol significantly inhibited hypoxia-induced microglial activation and reduced the subsequent release of proinflammatory factors. In addition, resveratrol inhibited the hypoxia-induced degradation of IκB-α and phosphorylation of the p65 NF-κB protein. Importantly, treating primary cortical neurons with conditioned medium (CM) from hypoxia-stimulated microglia induced neuronal apoptosis, which was reversed by CM from microglia co-treated with resveratrol.
Taken together, the results of this study suggest that resveratrol exerts neuroprotection against hypoxia-induced neurotoxicity through its anti-inflammatory effects in microglia. These effects were mediated, at least in part, by suppressing the activation of the NF-κB, extracellular signal-regulated kinase (ERK), and JNK/MAPK signaling pathways (Zhang et al., 2015). Although several studies have reported the efficacy of these compounds in animal models and in vitro, few plant-based products have been assessed in methodologically adequate human trials (Kennedy and Wightman, 2011), and clinical experiments have often failed to demonstrate any convincing therapeutic potency of these compounds (Berger et al., 2012).

DIETARY PHYTOCHEMICALS AND COGNITIVE PERFORMANCE IN HUMAN STUDIES

Accumulated data strongly suggest that phytochemicals from fruits, vegetables, herbs, and spices may exert relevant immunomodulatory and/or anti-inflammatory activities in the context of brain aging. The benefits of these substances for the cognitive health of older adults have been reported in several studies (Davinelli et al., 2015). In a recent review, Shukitt-Hale (2012) highlighted the potential of blueberries to counteract age-related changes in neuronal function. Additionally, Devore et al. (2012) showed that greater self-reported intakes of blueberries and strawberries were associated with slower rates of cognitive decline. Although several lines of evidence point toward the beneficial effects of these substances, limitations of this research include the use of correlational data as well as the lack of assessment of the bioavailability of these polyphenolic compounds from diets (Rowland et al., 2000). Commenges et al. (2000) demonstrated that the intake of flavonoids in 1367 subjects over 65 years of age was inversely associated with the risk of dementia at 5-year follow-up. Recently, Small et al. (2014) conducted a double-blind, placebo-controlled clinical trial using a pill-based nutraceutical (NT-020) that contained blueberry, carnosine, green tea, vitamin D3, and Biovin to evaluate its impact on changes in cognitive functioning. One hundred and five cognitively intact adults aged 65-85 years were randomized to receive NT-020 (n = 52) or a placebo (n = 53). Participants were tested with a battery of cognitive performance tests classified into six broad domains (episodic memory, processing speed, verbal ability, working memory, executive functioning, and complex speed) at baseline and 2 months later. The results indicated that persons taking NT-020 improved significantly on two measures of processing speed across the 2-month test period compared to persons on the placebo, whose performance did not change. The authors concluded that the results were promising and suggested the potential for interventions like these to improve the cognitive health of older adults (Small et al., 2014). These results are supported by recent evidence from Rabassa et al. (2015). In the context of the Invecchiare in Chianti (InCHIANTI) study, a cohort study with 3 years of follow-up, the authors assessed total urinary polyphenol (TUP) and total dietary polyphenol (TDP) concentrations in 652 individuals without dementia aged 65 and older, and assessed cognition using the Mini-Mental State Examination (MMSE) and Trail-Making Test (TMT) at baseline and after 3 years of follow-up.
Higher TUP levels were associated with lower risk of substantial cognitive decline on the MMSE and on the TMT-A in a logistic regression model adjusted for baseline cognitive score and potential confounding factors. These findings showed that high concentrations of polyphenols were associated with lower risk of substantial cognitive decline in an older population studied over a 3-year period, suggesting a protective effect against cognitive impairment (Rabassa et al., 2015). In a prospective study conducted among Japanese Americans living in King County, Washington, Dai et al. (2006) found that frequent drinking of fruit and vegetable juices was associated with a substantially decreased risk of Alzheimer's disease, with an inverse association that was stronger after adjustment for potential confounding factors and evident in all strata of selected variables. These findings suggest that fruit and vegetable juices may play an important role in delaying the onset of Alzheimer's disease (Dai et al., 2006). Krikorian et al. (2010a,b) investigated the effects of daily consumption of wild blueberry juice in a sample of nine older adults with early memory changes. At 12 weeks, improved paired-associate learning and word-list recall were observed. In addition, there were trends suggesting reduced depressive symptoms and lower glucose levels. Separately, twelve older adults with memory decline but not dementia were enrolled in a randomized, placebo-controlled, double-blind trial of Concord grape juice supplementation for 12 weeks (Butchart et al., 2011). The authors observed significant improvement in a measure of verbal learning and non-significant enhancement of verbal and spatial recall. There was no appreciable effect of the intervention on depressive symptoms and no effect on weight or waist circumference. These findings suggested that supplementation with Concord grape juice may enhance cognitive function in older adults with early memory decline. In a more recent study (Devore et al., 2012), performed on 16,010 women aged ≥70 years, greater intakes of blueberries and strawberries were associated with slower rates of cognitive decline after adjusting for multiple potential confounders. Berry intake appeared to delay cognitive aging by up to 2.5 years. Additionally, in further supporting evidence, greater intakes of anthocyanidins and total flavonoids were associated with slower rates of cognitive decline in older women (Devore et al., 2012). On the other hand, Butchart et al. (2011) investigated the same issue while controlling for possible confounding factors such as prior intelligence quotient (IQ). In a cross-sectional survey of 1091 men and women born in 1936, in which IQ had been measured at age 11 years, participants carried out various neuropsychological tests and completed a Food Frequency Questionnaire at the age of 70 years. Total fruit, citrus fruit, apple, and tea intakes were initially found to be associated with better scores in a variety of cognitive tests, but the associations were no longer statistically significant after adjusting for confounding factors, including childhood IQ, not supporting a role for flavonoids in the prevention of cognitive decline in later life (Butchart et al., 2011). However, in all of these studies no specific information on long-term dietary habits was available, whereas long-term diet is likely to be most relevant for cognitive decline (Devore et al., 2012).
An epidemiological study (Ringman et al., 2012) suggested that curcumin, as one of the most prevalent nutritional and medicinal compounds used by the Indian population, is responsible for the reduced (4.4-fold) prevalence of AD in India compared with the United States. As seen above, although there is much experimental in vitro and in vivo evidence of the efficacy of curcumin in the prevention of neurodegeneration, at present very few human studies have been performed, and these have shown only limited utility of this compound. Ringman et al. (2012) performed a 24-week randomized, double-blind, placebo-controlled study of curcumin with an open-label extension to 48 weeks. Thirty-six persons with mild-to-moderate AD were randomized to receive placebo, 2 g/day, or 4 g/day of oral curcumin for 24 weeks. For weeks 24 through 48, subjects who had been receiving curcumin continued with the same dose, while subjects previously receiving placebo were randomized in a 1:1 ratio to 2 g/day or 4 g/day. At the end of the study, no differences were found between treatment groups in clinical or biomarker efficacy measures (Ringman et al., 2012). Cox et al. (2015), in a randomized, double-blind, placebo-controlled trial, examined the acute (1 and 3 h after a single dose), chronic (4 weeks), and acute-on-chronic (1 and 3 h after a single dose following chronic treatment) effects of a solid lipid curcumin formulation (400 mg) on cognitive function, mood, and blood biomarkers in 60 healthy adults aged 60-85. One hour after administration, curcumin significantly improved performance on sustained attention and working memory tasks compared with placebo. Working memory and mood (general fatigue and change in state calmness, contentedness, and fatigue induced by psychological stress) were significantly better following chronic treatment. A significant acute-on-chronic treatment effect on alertness and contentedness was also observed (Cox et al., 2015). Altogether, these results highlight the need for further investigation of the potential cognitive benefits of curcumin, especially in the elderly, and the importance of the dose and method of administration. Although the plant-derived polyphenol resveratrol has been shown to increase memory performance in primates, interventional studies of this compound in older humans are also lacking. In a study by Kennedy et al. (2010), the effects of oral resveratrol on cognitive performance and localized cerebral blood flow variables in healthy human adults were assessed. In this randomized, double-blind, placebo-controlled, crossover study, 22 healthy adults received placebo and 2 doses (250 and 500 mg) of trans-resveratrol in counterbalanced order on separate days. After a 45-min resting absorption period, the participants performed a selection of cognitive tasks that activate the frontal cortex for an additional 36 min.

[Table 1, fragment recovered from the page layout: EGCG levels in the major organs, including the brain, were found to be ∼1/10 of those in serum, suggesting that EGCG passes through the blood-brain barrier (Guo et al., 2007); resveratrol, with its molecular weight of 228 Da (Amri et al., 2012) and lipid-soluble properties, should easily cross the BBB; resveratrol inhibits hypoxia-induced degradation of IκB-α and phosphorylation of the p65 NF-κB protein, effects mediated by suppressing the activation of the NF-κB, ERK, and JNK/MAPK signaling pathways, in in vitro models of the BV-2 microglial cell line and primary microglia (Zhang et al., 2015).]
Cerebral blood flow and hemodynamic responses, as indexed by concentration changes in oxygenated and deoxygenated hemoglobin, were assessed in the frontal cortex throughout the post-treatment period with the use of near-infrared spectroscopy. The presence of resveratrol and its conjugates in plasma was confirmed by HPLC after the same doses in a separate cohort (n = 9). Resveratrol administration resulted in dose-dependent increases in cerebral blood flow during task performance. There was also an increase in deoxyhaemoglobin after both doses of resveratrol, which suggested enhanced oxygen extraction that became apparent toward the end of the 45-min absorption phase and was sustained throughout task performance. Cognitive function was not affected. Resveratrol metabolites were present in plasma throughout the cognitive task period, suggesting that single doses of orally administered resveratrol can modulate cerebral blood flow variables (Kennedy et al., 2010). Witte et al. (2014) tested whether supplementation with resveratrol would enhance memory performance in older adults and addressed potential mechanisms underlying this effect. Twenty-three healthy overweight older individuals treated for 26 weeks with 200 mg/day resveratrol were compared with 23 participants who received placebo. Before and after the intervention/control period, subjects underwent memory tasks and neuroimaging to assess the volume, microstructure, and functional connectivity (FC) of the hippocampus, a key region implicated in memory functions. In addition, anthropometry, glucose and lipid metabolism, inflammation, neurotrophic factors, and vascular parameters were assayed. The authors observed a significant effect of resveratrol on the retention of words over 30 min compared with placebo. In addition, resveratrol led to significant increases in hippocampal FC, and the increases in FC between the left posterior hippocampus and the medial prefrontal cortex correlated with increases in retention scores. These findings could offer the basis for novel strategies to maintain brain health during aging (Witte et al., 2014).

CONCLUSIONS

Despite the translational gap between basic and clinical research, the current understanding of the molecular interactions between phytochemicals, immune function, and the inflammatory response could help in designing effective nutritional strategies to delay brain aging and improve cognitive function. Although, as described, many studies demonstrate the efficacy of phytochemicals (Table 1) in vitro and in vivo in the prevention and treatment of cognitive disorders, only limited evidence of their efficacy is available in humans, and there are still significant differences in the protocols used, the dosages, and the routes of administration. Therefore, these results do not yet allow firm conclusions about the real efficacy of these compounds in preventing and delaying the neuroinflammation associated with aging. Further research, mainly conducted as randomized controlled trials, should be performed in humans to determine the real role that phytochemicals can play in the prevention and treatment of neuroinflammaging.
AUTHOR CONTRIBUTIONS GC contributed substantially to conception, drafting the article, and final approval; VC contributed substantially to revision for important intellectual content and final approval of the version to be published; SD made substantial contributions to revising the article; GS made substantial contributions to revising the article; AF and NF contributed substantially to revision for important intellectual content and final approval of the version to be published. All the authors gave final approval of the version to be published.
2017-05-04T22:20:41.802Z
2016-10-13T00:00:00.000
{ "year": 2016, "sha1": "3f95f4fb0f4ab30c798fa49e7f1a0a7953fc9833", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2016.00364/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3f95f4fb0f4ab30c798fa49e7f1a0a7953fc9833", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
226114187
pes2o/s2orc
v3-fos-license
Generalizations of the Ruzsa-Szemer\'edi and rainbow Tur\'an problems for cliques

Considering a natural generalization of the Ruzsa-Szemer\'edi problem, we prove that for any fixed positive integers $r,s$ with $r<s$, there are graphs on $n$ vertices containing $n^{r}e^{-O(\sqrt{\log{n}})}=n^{r-o(1)}$ copies of $K_s$ such that any $K_r$ is contained in at most one $K_s$. We also give bounds for the generalized rainbow Tur\'an problem $\operatorname{ex}(n, H,$ rainbow-$F)$ when $F$ is complete. In particular, we answer a question of Gerbner, M\'esz\'aros, Methuku and Palmer, showing that there are properly edge-coloured graphs on $n$ vertices with $n^{r-1-o(1)}$ copies of $K_r$ such that no $K_r$ is rainbow.

Introduction

The famous Ruzsa-Szemerédi or (6, 3)-problem is to determine how many edges there can be in a 3-uniform hypergraph on $n$ vertices if no six vertices span three or more edges. This rather specific-sounding problem turns out to have several equivalent formulations, and bounds in both directions have had many applications. It is not difficult to prove an upper bound of $O(n^2)$: one first observes that if two edges have two vertices in common, then neither of them can intersect any other edges, and after removing all such pairs of edges one is left with a linear hypergraph, for which the bound is trivial. Brown, Erdős and Sós [13] gave a construction achieving $\Omega(n^{3/2})$ edges and asked whether the maximum is $o(n^2)$. The argument sketched in the previous paragraph shows that this question is equivalent to asking whether a graph on $n$ vertices such that no edge is contained in more than one triangle must contain $o(n^2)$ triangles. A positive answer to this question was given by Ruzsa and Szemerédi [12], who obtained a bound of $O(n^2/\log^* n)$ with the help of Szemerédi's regularity lemma. They also gave a construction showing that the number of triangles can be as large as $n^2e^{-O(\sqrt{\log n})} = n^{2-o(1)}$, so the exponent in their upper bound cannot be improved. One of the applications they gave of their upper bound was an alternative proof of Roth's theorem. Indeed, let $A$ be a subset of $\{1, \ldots, N\}$ that contains no arithmetic progression of length 3. Define a tripartite graph $G$ with vertex classes $X = \{1, 2, \ldots, N\}$, $Y = \{1, 2, \ldots, 2N\}$ and $Z = \{1, 2, \ldots, 3N\}$, where if $x \in X$, $y \in Y$ and $z \in Z$, then $xy$ is an edge if and only if $y - x \in A$, $yz$ is an edge if and only if $z - y \in A$, and $xz$ is an edge if and only if $(z - x)/2 \in A$. Note that these are the edges of the triangles with vertices belonging to triples of the form $(x, x + a, x + 2a)$ with $x \in X$ and $a \in A$. If $xyz$ is a triangle in this graph, then $a = y - x$, $b = z - y$, $c = (z - x)/2$ satisfy $a, b, c \in A$ and $a + b = 2c$, which gives us an arithmetic progression of length 3 in $A$ unless $y - x = z - y$. Thus, the only triangles are the 'degenerate' ones of the form $(x, x + a, x + 2a)$, which implies that each edge is contained in at most one triangle. Therefore, the number of triangles is $o(n^2)$ (where $n = 6N$). We also have that for each $a \in A$ there are $N$ triangles of the form $(x, x + a, x + 2a)$, so $|A| = o(N)$. As Ruzsa and Szemerédi also observed, this argument can be turned round: it tells us that if $A$ has density $\alpha$, then there is a graph with $6N$ vertices and $\alpha N^2$ triangles such that each edge is contained in at most one triangle. Since Behrend proved [6] that there exists a subset $A$ of $\{1, \ldots, N\}$ of size $Ne^{-O(\sqrt{\log N})}$ that does not contain an arithmetic progression of length 3, this gives the lower bound mentioned above.
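As an illustration of the correspondence just described, the sketch below builds the tripartite graph from a small 3-AP-free set and verifies that every edge lies in at most one triangle. The greedy choice of $A$ is a toy stand-in for Behrend's set, used only for demonstration; all parameter values are illustrative.

```python
from collections import Counter
from itertools import combinations

N = 30

# Greedy 3-AP-free subset of {1, ..., N} (toy stand-in for Behrend's set):
# a new, largest element a is accepted only if no x < y in A satisfy x + a = 2y.
A = []
for a in range(1, N + 1):
    if all(x + a != 2 * y for x, y in combinations(A, 2)):
        A.append(a)
Aset = set(A)

# For every edge of the tripartite graph, count the triangles containing it.
edge_count = Counter()
for x in range(1, N + 1):
    for y in range(1, 2 * N + 1):
        if y - x not in Aset:
            continue
        for z in range(1, 3 * N + 1):
            if z - y in Aset and (z - x) % 2 == 0 and (z - x) // 2 in Aset:
                # xyz is a triangle; record all three of its edges.
                for e in (("xy", x, y), ("yz", y, z), ("xz", x, z)):
                    edge_count[e] += 1

print(len(A), max(edge_count.values()))  # the second value should be 1
```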
Several related questions have been studied, as well as applications and generalizations of the Ruzsa-Szemerédi problem: see for example [4, 2]. A natural generalization that we believe has not been considered is the following generalized Turán problem.

Question 1.1. Let $r$ and $s$ be positive integers with $1 \le r < s$. Let $G$ be a graph on $n$ vertices such that any of its subgraphs isomorphic to $K_r$ is contained in at most one subgraph isomorphic to $K_s$. What is the largest number of copies of $K_s$ that $G$ can contain?

The Ruzsa-Szemerédi problem is the case $r = 2$, $s = 3$ of Question 1.1, and the answer is trivially $\Theta(n)$ if $r = 1$. One can easily deduce from the graph removal lemma an upper bound of $o(n^r)$ when $r \ge 2$. In the case $r = 2$, the construction for the lower bound can be generalized (for example, by using $h$-sum-free sets from [3]) to get a lower bound of $n^2e^{-O(\sqrt{\log n})}$. However, there is no obvious way of generalizing the algebraic construction for $r \ge 3$. We shall present a geometric construction instead, in order to prove the following result, which is the first of the two main results of this paper.

Theorem 1.2. For each $1 \le r < s$ and positive integer $n$ there is a graph on $n$ vertices with $n^re^{-O(\sqrt{\log n})} = n^{r-o(1)}$ copies of $K_s$ such that every $K_r$ is contained in at most one $K_s$.

We shall also use a modification of our construction to answer a question about rainbow colourings. Given an edge-colouring of a graph $G$, we say that a subgraph $H$ is rainbow if all of its edges have different colours. We denote by $\operatorname{ex}^*(n, H)$ the maximal number of edges that a graph on $n$ vertices can contain if it can be properly edge-coloured (that is, no two edges of the same colour meet at a vertex) in such a way that it contains no rainbow copy of $H$. The rainbow Turán problem (i.e., the problem of estimating $\operatorname{ex}^*(n, H)$) was introduced by Keevash, Mubayi, Sudakov and Verstraëte [11], and was studied for several different families of graphs $H$, such as complete bipartite graphs [11], even cycles [11, 7] and paths [10, 8]. Gerbner, Mészáros, Methuku and Palmer [9] considered the following generalized rainbow Turán problem (analogous to a generalization of the usual Turán problem introduced by Alon and Shikhelman [5]). Given two graphs $H$ and $F$, let $\operatorname{ex}(n, H, \text{rainbow-}F)$ denote the maximal number of copies of $H$ that a properly edge-coloured graph on $n$ vertices can contain if it has no rainbow copy of $F$. Note that $\operatorname{ex}^*(n, H)$ is the special case $\operatorname{ex}(n, K_2, \text{rainbow-}H)$. The authors of [9] focused on the case $H = F$ and obtained several results, for example when $H$ is a path, cycle or a tree, and also gave some general bounds. One of their concluding questions was the following.

Question 1.3 (Gerbner, Mészáros, Methuku and Palmer [9]). What is the order of magnitude of $\operatorname{ex}(n, K_r, \text{rainbow-}K_r)$ for $r \ge 4$?

For fixed $r$, a straightforward double-counting argument shows that if $H$ has $r$ vertices, then $\operatorname{ex}(n, H, \text{rainbow-}H) = O(n^{r-1})$. Indeed, if $G$ is a properly edge-coloured graph with $n$ vertices that contains no rainbow copy of $H$, then every copy of $H$ contains two edges of the same colour. But the number of such pairs of edges is at most $\binom{n}{2}\cdot\frac{n-2}{2} = O(n^3)$, since there are at most $\frac{n-2}{2}$ edges with the same colour as any given edge, and each such pair can be extended to at most $r!\,n^{r-4}$ copies of $H$.
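In symbols, the double-counting argument above amounts to the following chain of estimates (a restatement of the bound just derived, not an additional claim):
$$\operatorname{ex}(n, H, \text{rainbow-}H) \;\le\; \binom{n}{2}\cdot\frac{n-2}{2}\cdot r!\,n^{r-4} \;=\; O(n^3)\cdot O(n^{r-4}) \;=\; O(n^{r-1}),$$
where the first two factors bound the number of pairs of edges of the same colour, and the last factor bounds the number of ways to extend such a pair to a copy of $H$.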
The authors above improved this bound to $o(n^{r-1})$, and gave an example that shows that $\operatorname{ex}(n, K_r, \text{rainbow-}K_r) = \Omega(n^{r-2})$. They also asked whether there is a graph $H$ for which the exponent $r - 1$ in the upper bound is sharp. Our next result shows that $H = K_r$ is such a graph. Note that a triangle is always rainbow in a proper edge-colouring, so we trivially have $\operatorname{ex}(n, K_r, \text{rainbow-}K_r) = 0$ for $r < 4$.

Theorem 1.4. For each $r \ge 4$, $\operatorname{ex}(n, K_r, \text{rainbow-}K_r) \ge n^{r-1-o(1)}$.

In fact, our method can be used to prove the following more general result.

Theorem 1.5. Let $r \ge 4$, let $H$ be a graph, and let $H$ have a proper edge-colouring with no rainbow $K_r$. Suppose that for each vertex $v$ of $H$ there is a $p_v \in \mathbb{R}^m$, and for each colour $\kappa$ in the colouring there is a non-zero vector $z_\kappa$ such that for every edge $vw$ of colour $\kappa$, $z_\kappa$ is a linear combination of $p_v$ and $p_w$ with non-zero coefficients. Then $\operatorname{ex}(n, H, \text{rainbow-}K_r) \ge n^{m_0 - o(1)}$, where $m_0$ is the dimension of the subspace of $\mathbb{R}^m$ spanned by the points $p_v$.

It is easy to see that Theorem 1.4 is a special case of Theorem 1.5, but Theorem 1.5 also allows us to determine the behaviour of $\operatorname{ex}(n, H, \text{rainbow-}K_r)$ for several other natural choices of $H$. We give some examples in Section 5. Theorem 1.5 is 'almost equivalent' to the following, slightly weakened, alternative version.

Theorem 1.5$'$. Let $r \ge 4$, let $H$ be a graph, and let $c$ be a proper edge-colouring of $H$ without a rainbow $K_r$. Suppose that for each vertex $v \in V(H)$ we have a vector $p_v \in \mathbb{R}^{m-1}$, and for each colour $\kappa$ of $c$ the lines through the pairs $p_v, p_w$ with $c(vw) = \kappa$ are either all parallel, or all go through the same point, and that point is different from $p_v, p_w$ unless $p_v = p_w$. Assume that no $(m-2)$-dimensional affine subspace contains all the points $p_v$. Then $\operatorname{ex}(n, H, \text{rainbow-}K_r) \ge n^{m - o(1)}$.

It is easy to see that Theorem 1.5$'$ is equivalent to the weakened version of Theorem 1.5 where we make the additional assumption that each $p_v$ is non-zero. Indeed, given a configuration of points $p_v$ as in Theorem 1.5 (with $m = m_0$), we can project it from the origin to an appropriate affine $(m-1)$-dimensional subspace not going through the origin to get a configuration as in Theorem 1.5$'$. Conversely, a configuration of points $p_v$ as in Theorem 1.5$'$ gives a configuration as in Theorem 1.5 by taking the points $p_v \times \{1\} \in \mathbb{R}^m$.

The idea of the construction, and a preliminary lemma

We now briefly describe the construction used in our proof of Theorem 1.2. For simplicity, we focus on the case $r = 2$, $s = 3$, i.e., the Ruzsa-Szemerédi problem. Consider the $d$-dimensional sphere $S^d = \{x \in \mathbb{R}^{d+1} : \|x\| = 1\}$. (We will choose $d$ to be about $\sqrt{\log n}$.) Join two points of the sphere by an edge if the angle between the corresponding vectors is between $2\pi/3 - \delta$ and $2\pi/3 + \delta$, where $\delta$ is some appropriately chosen small number (roughly $e^{-\sqrt{\log n}}$). Then there are 'few' triangles containing any given edge, since if $xy$ is an edge then any point $z$ such that $xyz$ is a triangle is restricted to lie in a small neighbourhood around the point $-(x + y)$. However, there are 'many' edges, since the edge-neighbourhood of a point is a set of points around a codimension-1 surface, which is much larger than the neighbourhood of a single point. Choosing the constants appropriately, we can achieve that if we pick $n$ random points then any two of them form an edge with probability $n^{-o(1)}$, and any three of them form a triangle with probability $n^{-1-o(1)}$. Then any edge is expected to be in $n^{-o(1)}$ triangles and there are $n^{2-o(1)}$ edges.
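This continuous picture can be sanity-checked numerically. The sketch below samples random points on $S^d$, joins pairs whose angle lies within $\delta$ of $2\pi/3$, and counts how many triangles contain each edge; the parameters $n$, $d$ and $\delta$ are small toy values chosen for illustration only, not the asymptotic choices described in the text.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (illustrative only).
n, d, delta = 200, 3, 0.05
target = 2 * np.pi / 3  # pairwise angle of three unit vectors summing to zero

# n uniform random points on S^d, the unit sphere in R^{d+1}.
pts = rng.normal(size=(n, d + 1))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# Join two points when their angle is within delta of the target angle.
angles = np.arccos(np.clip(pts @ pts.T, -1.0, 1.0))
adj = np.abs(angles - target) < delta
np.fill_diagonal(adj, False)

# Count how many triangles contain each edge; the construction predicts few.
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if adj[i, j]]
per_edge = {e: 0 for e in edges}
for i, j, k in itertools.combinations(range(n), 3):
    if adj[i, j] and adj[i, k] and adj[j, k]:
        for e in ((i, j), (i, k), (j, k)):
            per_edge[e] += 1

print(len(edges), max(per_edge.values(), default=0))
```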
After some modification, we get a graph with $n^{2-o(1)}$ triangles in which any edge extends to at most one triangle. The general construction is quite similar. We want to define the edges in such a way that knowing the position of any $r$ of the vertices of a $K_s$ restricts the remaining $s - r$ vertices to small neighbourhoods around certain points, but knowing the position of $i$ points with $i < r$ only restricts the remaining points to a neighbourhood of a codimension-$i$ surface. For example, when $(r, s) = (3, 4)$, we can define our graph by joining two points if the angle between the corresponding vectors is close to the angle given by two vertices of a regular tetrahedron (centred at the origin). In fact, our construction and the construction of Ruzsa and Szemerédi based on the Behrend set are more similar than they might at first appear, which also explains why they give similar bounds (namely $n^2e^{-O(\sqrt{\log n})}$ for the case $r = 2$, $s = 3$). Behrend's construction [6] of a large set with no arithmetic progression of length 3 starts by observing that for any positive integers $k, d$ there is some $m$ such that the grid $\{1, \ldots, k\}^d$ intersects the sphere $\{x \in \mathbb{R}^d : \|x\|^2 = m\}$ in a set $A$ consisting of at least $k^d/(dk^2)$ points. This set $A$ has no arithmetic progression of length 3. (In Behrend's construction, this is transformed into a subset of $\mathbb{Z}$ using an appropriate map, but this is unnecessary for our purposes.) Repeating the construction from Section 1, we define a tripartite graph with vertex classes $X = \{1, \ldots, k\}^d$, $Y = \{1, \ldots, 2k\}^d$ and $Z = \{1, \ldots, 3k\}^d$, and edges given by the edges of the triangles $(x, x + a, x + 2a)$ with $x \in X$ and $a \in A$. Explicitly, for $x \in X$, $y \in Y$, $z \in Z$, we join $x$ and $y$ if $\|x - y\| = m^{1/2}$ (and $y_i \ge x_i$ for all $i$), we join $y$ and $z$ if $\|z - y\| = m^{1/2}$ (and $z_i \ge y_i$), and we join $x$ and $z$ if $\|x - z\| = 2m^{1/2}$ (and $z_i \ge x_i$). This gives the same phenomenon as our construction: the neighbourhood of a point $x$ is given by a codimension-1 condition, but the joint neighbourhood of two points is a single point, since $y$ must be the midpoint of $x$ and $z$.

We conclude this section with the following technical fact, whose proof we include for completeness. Given unit vectors $v, w$, we write $\angle(v, w)$ for the angle between $v$ and $w$ -- that is, for $\cos^{-1}(\langle v, w\rangle)$.

Lemma 2.1. There exist constants $0 < \alpha < B$ such that the following holds. Let $d$ be a positive integer, let $0 < \rho \le 2$ and let $v \in S^d$. Let $X_\rho = \{w \in S^d : \|v - w\| < \rho\}$. Let $\mu$ denote the usual probability measure on $S^d$. Then
$$(\alpha\rho)^d \le \mu(X_\rho) \le (B\rho)^d.$$
Furthermore, for any $-1 < \xi < 1$ there exists $\beta > 0$ such that for every positive integer $d$, every $v \in S^d$ and every $0 < \varepsilon \le 1$, we have $\mu(\{w \in S^d : |\langle v, w\rangle - \xi| \le \varepsilon\}) \ge \beta^d\varepsilon$.

Proof. Using the usual spherical coordinate system, we see that for $0 \le t \le \pi$,
$$\mu(\{w \in S^d : \angle(v, w) \le t\}) = \frac{\int_0^t \sin^{d-1}\theta\, d\theta}{\int_0^\pi \sin^{d-1}\theta\, d\theta}. \qquad (1)$$
But we have $\theta \ge \sin\theta \ge \frac{2}{\pi}\theta$ for $0 \le \theta \le \pi/2$. Thus, $\int_0^t \sin^{d-1}\theta\, d\theta$ is between $\frac{(c_1t)^d}{d}$ and $\frac{t^d}{d}$ for all $0 \le t \le \pi/2$ (for some constant $0 < c_1 < 1$). Using this bound for both the numerator and the denominator in (1), we deduce that $\mu(\{w \in S^d : \angle(v, w) \le t\})$ is between $(\alpha' t)^d$ and $(B' t)^d$ for suitable constants $0 < \alpha' < B'$. Since $\|v - w\| = 2\sin(\angle(v, w)/2)$ is comparable to $\angle(v, w)$, the first claim follows. Choosing some sufficiently small $\beta$, the second claim follows.

The generalized Ruzsa-Szemerédi problem

In this section we prove the first of our main results, Theorem 1.2. In the case $r = 2$, $s = 3$, the construction is based, as we saw in Section 2, on the observation that if we wish to find three vectors in $S^d = \{x \in \mathbb{R}^{d+1} : \|x\| = 1\}$ in such a way that the angle between any two of them is $120°$, and if we choose the vertices one by one, then there are $d$ degrees of freedom for the first vertex and $d - 1$ for the second, but the third is then uniquely determined.
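The uniqueness of the third vertex can be verified with one short linear-algebra computation (a spelled-out version of the claim just made): suppose $\langle x, y\rangle = -\tfrac12$ and $z \in S^d$ satisfies $\langle z, x\rangle = \langle z, y\rangle = -\tfrac12$. Writing $z = ax + by + w$ with $w$ orthogonal to the span of $x$ and $y$, the two inner product conditions read
$$a - \tfrac{b}{2} = -\tfrac12, \qquad -\tfrac{a}{2} + b = -\tfrac12,$$
so $a = b = -1$. Since $\|x + y\|^2 = 2 + 2\langle x, y\rangle = 1$, we get $\|z\|^2 = \|-(x+y)\|^2 + \|w\|^2 = 1 + \|w\|^2$, and $\|z\| = 1$ forces $w = 0$, i.e. $z = -(x+y)$.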
This gives us an example of a 'continuous graph' with 'many' edges, such that each edge is in exactly one triangle, and a suitable perturbation and discretization of this graph gives us a finite graph with n^{2−o(1)} triangles such that each edge belongs to at most one triangle.

To generalize this to arbitrary (r, s) we need to find a configuration of s unit vectors (where by 'configuration' we mean an s × s symmetric matrix that specifies the angles, or equivalently inner products, between the unit vectors) with the property that if we choose the points of the configuration one by one, then for i ≤ r the i-th point can be chosen with d + 1 − i degrees of freedom, but from the (r + 1)-st point onwards all points are uniquely determined. It turns out that all we have to do is choose an arbitrary collection of s points p_1, …, p_s in general position from the sphere S^{r−1} and take the angles ∠(p_i, p_j). To see that this works, suppose we wish to choose x_1, …, x_s ∈ S^d one by one in such a way that ⟨x_i, x_j⟩ = ⟨p_i, p_j⟩ for every i, j. Suppose that we have chosen x_1, …, x_r and let V be the r-dimensional subspace that they generate. Let u_{r+1} be the orthogonal projection of x_{r+1} to V. Then ⟨u_{r+1}, x_i⟩ = ⟨x_{r+1}, x_i⟩ for each i ≤ r, and u_{r+1} ∈ V, so u_{r+1} is uniquely determined. Furthermore, since the inner products ⟨p_i, p_j⟩ are equal to ⟨x_i, x_j⟩ when i, j ≤ r and to ⟨x_i, u_{r+1}⟩ when i ≤ r, j = r + 1, and p_{r+1} is a unit vector, it must be that u_{r+1} is a unit vector, which implies that x_{r+1} = u_{r+1}. Since this argument made no use of the ordering of the vectors, it follows that any r vectors in a configuration determine the rest, as claimed.

We shall now use this observation as a guide for constructing a finite graph with many copies of K_s such that each K_r is contained in at most one K_s. As above, pick s 'reference' points p_1, …, p_s in general position on the sphere S^{r−1}. Since for any set B ⊆ {1, …, s} of size r the points p_b (b ∈ B) form a basis of R^r, we may write, for any a ∉ B, p_a = Σ_{b∈B} λ_{B,a,b} p_b for suitable coefficients λ_{B,a,b}.

For any c > 0 and positive integers N, d we define an s-partite random graph G_{N,d,c} as follows. (The graph will also depend on r, s, p_1, …, p_s, but for readability we drop these dependencies from the notation.) Consider the usual probability measure on the d-sphere S^d. Pick, independently and uniformly at random, sN points x_{a,i} (1 ≤ a ≤ s, 1 ≤ i ≤ N) on S^d: these points form the vertex set. Join two points x_{a,i} and x_{b,j} with a ≠ b by an edge if |⟨x_{a,i}, x_{b,j}⟩ − ⟨p_a, p_b⟩| < c.

We also define a graph G′_{N,d,c} as follows. Let M_0 be the maximum among all values of |λ_{B,a,b}| and λ²_{B,a,b}, and let M = 2(r + 1)M_0^{1/2}; we form G′_{N,d,c} from G_{N,d,c} by deleting every vertex that lies within distance Mc^{1/2} of another vertex. This graph is designed to be finite and to have the property that any copy of K_s must be close to a configuration with angles determined by the points p_1, …, p_s. The vertex deletions are there to ensure that the vertices are reasonably well separated. This will imply that no K_r is contained in more than one K_s, since once r vertices of a K_s are chosen, the remaining vertices are constrained to lie in small neighbourhoods.

Lemma 3.1. The graph G′_{N,d,c} has the property that any of its subgraphs isomorphic to K_r is contained in at most one subgraph isomorphic to K_s (for any choices of r, s, p_1, …, p_s, N, d, c).

Proof. Let x_{a_1,i_1}, …, x_{a_r,i_r} be points that form a K_r. Then necessarily all a_t are distinct. Suppose that we have two extensions H_1, H_2 of this K_r to a K_s. Then both H_1 and H_2 intersect each class V_a in exactly one point.
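The uniqueness argument is easy to test numerically. The following sketch (our own illustration) embeds an exact copy of a configuration p_1, …, p_{r+1} ⊂ S^{r−1} into S^d via an isometry and recovers the last point from the first r by orthogonal projection, as in the argument above:

```matlab
% Numerical check: r points of a configuration determine the (r+1)-st.
r = 3; d = 10; s = r + 1;

P = randn(r, s); P = P ./ vecnorm(P);   % reference points on S^{r-1} (a.s. in general position)
[Q, ~] = qr(randn(d+1, r), 0);          % isometric embedding R^r -> R^{d+1}
X = Q * P;                              % exact copy of the configuration inside S^d

V = X(:, 1:r);                          % the first r vectors span an r-dimensional subspace
u = V * ((V' * V) \ (V' * X(:, s)));    % orthogonal projection of x_{r+1} onto that subspace

fprintf('||u|| = %.6f, ||u - x_{r+1}|| = %.2e\n', norm(u), norm(u - X(:, s)));
% ||u|| is 1 and u coincides with x_{r+1}, as the argument predicts.
```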
We now show that for each a this point must be the same for H_1 and H_2, which will imply the lemma. Suppose that H_1 and H_2 meet the class V_a in the points x and y, respectively. Then x and y are each joined to all of x_{a_1,i_1}, …, x_{a_r,i_r}, so, writing B = {a_1, …, a_r}, each of x and y lies within distance (M/2)c^{1/2} of the point Σ_t λ_{B,a,a_t} x_{a_t,i_t}, and hence ‖x − y‖ ≤ Mc^{1/2}. By the definition of G′_{N,d,c}, we must have x = y.

To prove Theorem 1.2, it suffices to show that the expected number of copies of K_s in G′_{N,d,c} is at least N^r e^{−O(√(log N))} for suitable choices of d and c. For this purpose we shall use the following technical lemma. For later convenience (in Section 4), we state it in a slightly more general form than required here, to allow the possibility that r = s and the possibility that p_1, …, p_s are not in general position (but still span R^r).

Lemma 3.2. Let 1 ≤ r ≤ s be positive integers and let p_1, …, p_s be points on S^{r−1} such that p_1, …, p_r form a basis of R^r. Then there exist constants α > 0 and h such that for any d ≥ r and 0 < c < 1, the probability that a set {x_a : 1 ≤ a ≤ s} of random unit vectors (chosen independently and uniformly) on S^d satisfies |⟨x_a, x_b⟩ − ⟨p_a, p_b⟩| < c for all a ≠ b is at least α^d c^{d(s−r)/2 + h}.

We may think of the conclusion of Lemma 3.2 as follows. The dominant (smallest) factor in the probability above is the factor c^{d(s−r)/2}. The probability should be close to this because if we imagine placing the s points one by one and we have already picked x_1, …, x_i joined to each other, then
• if i < r, then x_{i+1} is restricted to a neighbourhood of a codimension-i surface, so with reasonably large probability (comparable to c^i) it is connected to all previous vertices;
• if i ≥ r, then the linear dependencies between the points restrict x_{i+1} to be in a ball of radius about c^{1/2} around a certain point, which has measure about c^{d/2} (which is much smaller than c^r).
The proof of Lemma 3.2 is given in an appendix.

Proof of Theorem 1.2. By Lemma 2.1, there are constants c_0, B, C such that if c < c_0 then the probability that a given vertex x_{a,i} is removed from G_{N,d,c} when forming G′_{N,d,c} is at most sN(BMc^{1/2})^d ≤ sN C^d c^{d/2}. Here B > 0 is an absolute constant and the constants C, c_0 > 0 depend on r, s, p_1, …, p_s only. Moreover, the event 'x_{a,i} is removed' is independent of any event of the form 'x_{1,i_1}, …, x_{s,i_s} form a K_s in G_{N,d,c}'. Using Lemma 3.2, we deduce that the probability that x_{1,i_1}, …, x_{s,i_s} is contained in G′_{N,d,c} and forms a K_s is at least (1 − sN C^d c^{d/2}) α^d c^{d(s−r)/2+h} (where α, h depend on r, s, p_1, …, p_s only). So the expected number of copies of K_s in G′_{N,d,c} is at least

N^s (1 − sN C^d c^{d/2}) α^d c^{d(s−r)/2+h}.   (2)

Choosing d = ⌈√(log N)⌉ and c so that c^{d/2} = N^{−1}e^{−E√(log N)}, the quantity in (2) is at least N^r e^{−η√(log N)} for some constants η > 0 and E not depending on N, d, and c < c_0 when N is sufficiently large. The result follows, as G′_{N,d,c} has at most sN vertices. Note that our proof in fact also gives the correct (and trivial) lower bound Θ(n) in the case r = 1, since if r = 1 then h = 0, so we may choose d to be a constant and get Θ(N) in (2).

Generalized rainbow Turán numbers for complete graphs

We now turn to the proofs of our results about generalized rainbow Turán numbers (Theorems 1.4 and 1.5). First we recall a general result of Gerbner, Mészáros, Methuku and Palmer [9] (Proposition 4.1), which can be proved using the graph removal lemma. In particular, we know that ex(n, K_r, rainbow-K_r) = o(n^{r−1}). We would like to match this with a lower bound of the form n^{r−1−o(1)}. Before we prove such a bound, let us briefly discuss the ideas that underlie the proof. It is easy to show that a lower bound ex(n, K_4, rainbow-K_4) ≥ n^{3−o(1)} would imply that ex(n, K_r, rainbow-K_r) ≥ n^{r−1−o(1)} for all r ≥ 4, so it suffices to consider the case r = 4.
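For concreteness, the exponent bookkeeping behind the final parameter choice can be spelled out as follows (our own expansion of the arithmetic, with C, α, h, η and E as in the text):

```latex
% Bookkeeping for d = \lceil\sqrt{\log N}\,\rceil and
% c^{d/2} = N^{-1}e^{-E\sqrt{\log N}}, so that \log(1/c) = O(\sqrt{\log N}):
\begin{align*}
sN C^{d} c^{d/2} &= s\,C^{d} e^{-E\sqrt{\log N}} \le \tfrac12
   \quad\text{for $E$ large, since } C^{d} = e^{O(\sqrt{\log N})},\\
N^{s} c^{d(s-r)/2} &= N^{s}\bigl(N^{-1}e^{-E\sqrt{\log N}}\bigr)^{s-r}
   = N^{r}\, e^{-(s-r)E\sqrt{\log N}},\\
\alpha^{d} c^{h} &= e^{-O(\sqrt{\log N})},
\end{align*}
% so the expression in (2) is at least N^{r} e^{-\eta\sqrt{\log N}}.
```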
However, when r = 4 and G is a properly edge-coloured graph with no rainbow K_4, then every triangle of G is contained in at most three copies of K_4. (Indeed, if the vertices of the K_3 are x, y, z, then the only way that adding a further vertex w can lead to a non-rainbow K_4 is if wx has the same colour as yz, wy has the same colour as xz or wz has the same colour as xy. But since the edge-colouring is proper, we cannot find more than one w such that the same one of these three events occurs.) So it is natural to expect that our construction for Theorem 1.2 is relevant here.

To see how a similar construction gives the desired result, it is helpful, as earlier, to look at a simpler continuous example that serves as a guide to the construction. Consider the graph where the vertex set is S^d and two unit vectors v, w are joined if and only if ⟨v, w⟩ = −1/3 (the angle between vectors that go through the origin and two distinct vertices of a regular tetrahedron). Then any K_4 in this graph must be given by the vertices of a regular tetrahedron. We colour an edge by the line that joins the origin to the midpoint of that edge. This is a proper colouring with the property that opposite edges have the same colour, so each K_4 is 3-coloured in this colouring.

The construction we are about to describe is a suitable perturbation and discretization of this one. For the discretized graph, we will again have 'near-regular' tetrahedra forming K_4s. To ensure that each copy of K_4 is still non-rainbow, we shall have to modify the colouring slightly. We shall take only certain 'allowed lines' as colours, and we shall colour an edge by the allowed line that is closest to the line through the midpoint (if that line is not very far; otherwise we delete the edge). We need to choose the allowed lines in such a way that no two allowed lines are close (so that near-regular tetrahedra are still 3-coloured), but a large proportion of lines are close to an allowed line (so that not too many edges are deleted). This can be achieved using the following lemma.

Lemma 4.2. There is an absolute constant δ > 0 such that for every positive integer d and every 0 < c_1 < 1 there exist points q_1, …, q_L on S^d with L ≥ (δ/c_1)^d such that ‖q_i − q_j‖ ≥ 3c_1 and ‖q_i + q_j‖ ≥ 3c_1 for all i ≠ j.

Proof. Take a maximal set of points satisfying the condition above. Then the balls of radius 3c_1 around the points ±q_1, …, ±q_L cover the entire sphere. But any such ball covers a proportion of surface area at most (Bc_1)^d for some constant B (by Lemma 2.1). Therefore 2L(Bc_1)^d ≥ 1, which gives the result.

One can prove Theorem 1.4 using the method described above. However, the proof naturally yields the more general Theorem 1.5 (which is restated below), so that is what we shall do. Essentially, we can prove a lower bound of n^{m−o(1)} for a graph H whenever we can draw H in R^m in such a way that for each colour there is a line through the origin meeting (the line of) each edge of that colour, and the vertices of the graph span R^m.

Theorem 1.5. Let r ≥ 4, let H be a graph, and let H have a proper edge-colouring with no rainbow K_r. Suppose that for each vertex v of H there is a p_v ∈ R^m, and for each colour κ in the colouring there is a non-zero vector z_κ such that for every edge vw of colour κ, z_κ is a linear combination of p_v and p_w with non-zero coefficients. Then ex(n, H, rainbow-K_r) ≥ n^{m_0 − o(1)}, where m_0 is the dimension of the subspace of R^m spanned by the points p_v.

Proof. Passing to a subspace, we may assume that m = m_0 and {p_v : v ∈ V(H)} spans R^m. Furthermore, by rescaling we may assume that each z_κ and each non-zero p_v has unit length.
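As a quick numerical sanity check of this guiding example (our own illustration, not part of the paper), the following verifies that the vertices of a regular tetrahedron on S² have pairwise inner products −1/3 and that opposite edges have antipodal midpoints, so the midpoint colouring assigns each K_4 exactly three colours:

```matlab
% Regular tetrahedron in R^3: columns are the four unit vertices.
T = [ 1  1 -1 -1;
      1 -1  1 -1;
      1 -1 -1  1 ] / sqrt(3);

disp(T' * T);   % off-diagonal entries are all -1/3

% The three pairs of opposite edges: (1,2)|(3,4), (1,3)|(2,4), (1,4)|(2,3).
pairs = [1 2 3 4; 1 3 2 4; 1 4 2 3];
for k = 1:3
    m1 = (T(:, pairs(k,1)) + T(:, pairs(k,2))) / 2;
    m2 = (T(:, pairs(k,3)) + T(:, pairs(k,4))) / 2;
    % m1 and m2 are antipodal, so they span the same line through the origin.
    fprintf('pair %d: ||m1 + m2|| = %.2e\n', k, norm(m1 + m2));
end
```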
For each c > 0 and any two positive integers N, d, we define a (random) graph F_{N,d,c} as follows. Write V_0 = {v ∈ V(H) : p_v = 0} and V_1 = V(H) \ V_0. The vertex set of F_{N,d,c} has |H| parts labelled by the vertices of H. If v ∈ V_0 then there is a single point x_{v,1} = 0 in the part labelled by v. If v ∈ V_1, then we pick (uniformly and independently at random) N points x_{v,1}, …, x_{v,N} on S^d: these will be the vertices in the part labelled by v. We join two vertices x_{v,a} and x_{w,b} by an edge if and only if vw ∈ E(H) and |⟨x_{v,a}, x_{w,b}⟩ − ⟨p_v, p_w⟩| < c.

By assumption, we know that for each edge vw of colour κ there exist non-zero real coefficients λ_{κ,v}, λ_{κ,w} such that z_κ = λ_{κ,v}p_v + λ_{κ,w}p_w. Let λ be the minimum and M_0 the maximum of the values |λ_{κ,v}|, and set c_1 = (12M_0²c)^{1/2}; in addition, we delete from F_{N,d,c} every vertex that lies within distance 2c_1/λ of another vertex in the same part. (The exact values of the constants are not particularly important; they were chosen so that the graph described below will be properly coloured with no rainbow K_r. That is, we could replace 12M_0² and 2/λ by other sufficiently large constants.)

Let q_1, …, q_L be points on S^d with L ≥ (δ/c_1)^d such that ‖q_i − q_j‖ ≥ 3c_1 for all i ≠ j. Here δ is some positive (absolute) constant, and the existence of such a set follows from Lemma 4.2. Also, pick independently and uniformly at random a rotation R_κ ∈ SO(d + 1) for each colour κ used in the edge-colouring of H. The probability measure we use on SO(d + 1) is the usual (Haar) measure, so for any q ∈ S^d the points R_κq are independently and uniformly distributed on S^d. We think of the points R_κq_l (l = 1, …, L) as the allowed colours for the edges x_{v,i}x_{w,j} when vw ∈ E(H) has colour κ (and we take different rotations for different colours to have independence).

We form an edge-coloured graph F′_{N,d,c} from F_{N,d,c} as follows. For any edge x_{v,i}x_{w,j} of F_{N,d,c}, we perform the following modification. Let κ be the colour of vw in E(H), and let λ_{κ,v}, λ_{κ,w} ≠ 0 be as before, so that z_κ = λ_{κ,v}p_v + λ_{κ,w}p_w.
• If there is some l with ‖λ_{κ,v}x_{v,i} + λ_{κ,w}x_{w,j} − R_κq_l‖ < c_1, then we colour the edge x_{v,i}x_{w,j} with colour (κ, l). Note that such an l must be unique since ‖R_κq_l − R_κq_{l′}‖ ≥ 3c_1 if l ≠ l′.
• Otherwise we delete the edge x_{v,i}x_{w,j}.

Claim 1. The edge-colouring of F′_{N,d,c} is proper.

Proof. Suppose that x_{v,i}x_{w,j} and x_{v,i}x_{w′,j′} are both edges with colour (κ, l). Then vw and vw′ both have colour κ in E(H), thus w = w′. Also, both λ_{κ,v}x_{v,i} + λ_{κ,w}x_{w,j} and λ_{κ,v}x_{v,i} + λ_{κ,w}x_{w,j′} lie within distance c_1 of R_κq_l, so ‖λ_{κ,w}(x_{w,j} − x_{w,j′})‖ < 2c_1 and hence ‖x_{w,j} − x_{w,j′}‖ < 2c_1/λ. But then j = j′ by the definition of F_{N,d,c}. So the edge-colouring of F′_{N,d,c} is indeed proper.

Claim 2. There is no rainbow copy of K_r in F′_{N,d,c}.

Proof. Suppose that the vertices x_{v_1,i_1}, …, x_{v_r,i_r} form a K_r in F′_{N,d,c}. Then v_1, …, v_r form a K_r in H. This K_r is not rainbow (by assumption). By symmetry, we may assume that the edges v_1v_2 and v_3v_4 both have colour κ. Write x_a for x_{v_a,i_a} and λ_a for λ_{κ,v_a} for a = 1, 2, 3, 4. Then we have (recalling that M_0 = max_{κ′,v} |λ_{κ′,v}|)

‖(λ_1x_1 + λ_2x_2) − (λ_3x_3 + λ_4x_4)‖² < 12M_0²c = c_1²,

since expanding the left-hand side and replacing each inner product ⟨x_a, x_b⟩ by ⟨p_{v_a}, p_{v_b}⟩ changes it by less than 12M_0²c, while the resulting expression ‖(λ_1p_{v_1} + λ_2p_{v_2}) − (λ_3p_{v_3} + λ_4p_{v_4})‖² = ‖z_κ − z_κ‖² vanishes. But if x_1x_2 has colour (κ, l) and x_3x_4 has colour (κ, l′), then

‖R_κq_l − R_κq_{l′}‖ ≤ ‖R_κq_l − (λ_1x_1 + λ_2x_2)‖ + ‖(λ_1x_1 + λ_2x_2) − (λ_3x_3 + λ_4x_4)‖ + ‖(λ_3x_3 + λ_4x_4) − R_κq_{l′}‖ < 3c_1.

It follows that l = l′ and hence the K_r with vertices x_{v_1,i_1}, …, x_{v_r,i_r} is not rainbow.

Claim 3. For suitable choices of d and c, the expected number of copies of H in F′_{N,d,c} is at least N^{m} e^{−O(√(log N))}.

Proof. Pick arbitrary vertices x_{v,i_v} in the classes (with i_v = 1 if v ∈ V_0 and 1 ≤ i_v ≤ N otherwise). We consider the probability that they form a copy of H in F′_{N,d,c}. Write x_v for x_{v,i_v}. Let ε > 0 be a small constant to be specified later. By Lemma 3.2, we have

P[|⟨x_v, x_w⟩ − ⟨p_v, p_w⟩| < εc for all v ≠ w] ≥ α^d (εc)^{d(|V_1|−m)/2 + h}   (3)

for some constants α > 0 and h. Let v ∈ V_1. By Lemma 2.1, the probability that x_v is removed when we form F′_{N,d,c} is at most N(2Bc_1/λ)^d ≤ NC^dc^{d/2} for some constant C; let (4) denote the event that no x_v is removed. Finally, for each colour κ in the colouring of E(H), pick an edge v_κw_κ of that colour in H. Write y_κ = λ_{κ,v_κ}x_{v_κ} + λ_{κ,w_κ}x_{w_κ} and ŷ_κ = y_κ/‖y_κ‖.
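The colouring step amounts to quantizing the combination λ_{κ,v}x_{v,i} + λ_{κ,w}x_{w,j} against the rotated reference points. A minimal sketch of that step, assuming Q holds the points R_κq_l as columns (the function and variable names are ours):

```matlab
% Colour an edge or delete it: find the allowed point within c1 of z,
% where z = lambda_v*x_v + lambda_w*x_w and Q is (d+1)-by-L.
function [keep, l] = colourEdge(z, Q, c1)
    dists = vecnorm(Q - z);    % distance from z to each allowed point
    [dmin, l] = min(dists);
    keep = dmin < c1;          % otherwise the edge is deleted
    % The allowed points are pairwise >= 3*c1 apart, so when keep is
    % true the minimiser l is unique.
end
```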
Note that y_κ ≠ 0 with probability 1, since all λ_{κ,v} are non-zero and at least one of p_{v_κ} and p_{w_κ} is non-zero. For each κ, if ε is sufficiently small then by Lemma 2.1 we have

P[‖ŷ_κ − R_κq_l‖ < ηc_1 for some l] ≥ η_1^d   (5)

for some constants 0 < η < 1 and η_1 > 0. Observe that the events in (3), (4) and (5) (for all κ) are independent. It follows that

P[the events in (3), (4) and, for all κ, (5) all occur] ≥ γ^d (εc)^{d(|V_1|−m)/2 + h},   (6)

where γ is some constant depending on ε and H (but not on N, d, c). We show that these events together imply that the x_v form a copy of H, if ε is sufficiently small. The only property that we need to check is that no edge is removed when F′_{N,d,c} is formed out of F_{N,d,c}. Consider then an edge uu′ of H. Let κ be its colour and write v = v_κ, w = w_κ, y = y_κ, ŷ = ŷ_κ, λ_v = λ_{κ,v}, λ_w = λ_{κ,w}, λ_u = λ_{κ,u}, and λ_{u′} = λ_{κ,u′}. We have

|‖y‖ − 1| = |‖y‖ − ‖z_κ‖| ≤ 2M_0²εc, so ‖y − ŷ‖ ≤ 2M_0²εc.

Furthermore, if we write y′ = λ_ux_u + λ_{u′}x_{u′}, then ‖y′ − y‖ ≤ √ε · c_1, exactly as in the proof of Claim 2 (both y and y′ approximate z_κ). It follows that, taking l with ‖ŷ − R_κq_l‖ < ηc_1 as in (5),

‖λ_ux_u + λ_{u′}x_{u′} − R_κq_l‖ ≤ ‖y′ − y‖ + ‖y − ŷ‖ + ‖ŷ − R_κq_l‖ < (√ε + η)c_1 + 2M_0²εc.

This is indeed less than c_1 = (12M_0²c)^{1/2} if ε is sufficiently small. Choosing ε appropriately, (6) gives that the expected number of copies of H in F′_{N,d,c} is at least N^{|V_1|}γ^d(εc)^{d(|V_1|−m)/2+h}; choosing d and c as in the proof of Theorem 1.2, this is at least N^m e^{−O(√(log N))}, which proves the claim. Since F′_{N,d,c} is properly edge-coloured (Claim 1), contains no rainbow K_r (Claim 2) and has at most |H|N vertices, Theorem 1.5 follows.

Deduction of Theorem 1.4 from Theorem 1.5. Given a complete graph K_r on vertex set {1, …, r}, we can properly edge-colour it by giving the edges 12 and 34 the same colour κ, and giving arbitrary different colours to the remaining edges. Pick r − 1 linearly independent points p_2, p_3, …, p_r in R^{r−1}, and let p_1 = p_2 + p_3 + p_4. Let z_κ = p_3 + p_4 = p_1 − p_2 and let z_{κ′} = p_i + p_j when ij is an edge of colour κ′ ≠ κ. Theorem 1.5 gives that ex(n, K_r, rainbow-K_r) ≥ n^{r−1−o(1)}, and we have a matching upper bound by Proposition 4.1.

Some applications of Theorem 1.5

We have already seen that Theorem 1.5 can be used to answer the question of Gerbner, Mészáros, Methuku and Palmer about the order of magnitude of ex(n, K_r, rainbow-K_r). In this section we give some other examples of applications of the theorem. To show that our lower bounds are sharp, we shall use a simple proposition to give matching upper bounds. This will require the following definition. Given a graph H and a proper edge-colouring c of H, we say that a subset V_0 ⊆ V(H) is a c-spanning set if there is an ordering v_1, …, v_k of the vertices in V(H)\V_0 such that for all i there are some u, u′, w ∈ V_0 ∪ {v_1, …, v_{i−1}} such that uu′ ∈ E(H), v_iw ∈ E(H) and c(uu′) = c(v_iw). In other words, we can add the remaining vertices to V_0 one by one in a way that new vertices are joined to some vertex in the set by a colour already used.

Proposition 5.1. Let F and H be graphs and let r be a positive integer. Suppose that every proper edge-colouring c of H without a rainbow copy of F admits a c-spanning set of size at most r. Then ex(n, H, rainbow-F) = O(n^r). If, in addition, r < |V(H)| and for every such c and every edge e of H there is a c-spanning set of size at most r containing e, then ex(n, H, rainbow-F) = o(n^r).

Proof. Let G be a graph on n vertices and let κ be a proper edge-colouring of G without a rainbow copy of F. Let G contain M copies of H. Then we can partition the vertices of G into classes X_v (v ∈ V(H)) such that a positive proportion of the copies of H have, for each v, the vertex playing the role of v lying in X_v (this loses only a constant factor, by averaging over random partitions); it therefore suffices to count such copies x = (x_v)_{v∈V(H)}. Each copy x induces an edge-colouring c_x of H without a rainbow F, and the number of colour-coincidence patterns c_x is bounded in terms of H, so we may fix a proper edge-colouring c of H without a rainbow F and count the copies x with c_x = c. Let V_0 be a c-spanning set of size at most r. Note that any x with c_x = c is determined by (x_v)_{v∈V_0}, since the edge-colouring is proper. But there are O(n^r) choices for (x_v)_{v∈V_0}, hence M = O(n^r).

Now assume that r < |V(H)| and that for every proper edge-colouring c of H without a rainbow F and every edge e of H there is a c-spanning set of size at most r that contains e. By the graph removal lemma and the first part of our proposition, we can remove o(n²) edges from G so that the new graph contains no copy of H. So it suffices to show that each edge appeared in at most O(n^{r−2}) tuples x with c_x = c. Given an edge e = y_vy_w with y_v ∈ X_v, y_w ∈ X_w and vw ∈ E(H), we can pick in H a c-spanning set V_{0,e} of size at most r containing vw. Then any x with c_x = c, x_v = y_v and x_w = y_w is determined by (x_u)_{u∈V_{0,e}\{v,w}}, which gives the result.
Now we give some sample applications of Theorem 1.5 and Proposition 5.1. We shall give two illustrations, but it is quite easy to generate additional examples.

Complete graphs

Perhaps the most natural extension of Question 1.3 is to determine the behaviour of the function ex(n, K_r, rainbow-K_s). Note that trivially ex(n, K_r, rainbow-K_s) = Θ(n^r) when s > r (by taking a complete r-partite graph), and we have seen that ex(n, K_s, rainbow-K_s) = n^{s−1−o(1)} (when s ≥ 4). We also have ex(n, K_r, rainbow-K_s) = 0 whenever r ≥ r_s for some integer r_s depending on s. Indeed, if we have a K_r with no rainbow copy of K_s, and the largest rainbow complete subgraph has order t < s, then any of the remaining (r − t) vertices must be joined to this K_t by one of the t(t − 1)/2 colours appearing in the K_t. But each such colour appears at most once at each vertex, giving r = O(s³). In fact, Alon, Lefmann and Rödl showed [1] that r_s = Θ(s³/log s).

However, the question is non-trivial for s < r < r_s. First note that ex(n, K_r, rainbow-K_s) = o(n^{s−1}) whenever r ≥ s by Proposition 5.1 (since any maximal rainbow complete subgraph is a c-spanning set). The simplest case for the lower bound is (r, s) = (5, 4). In this case Theorem 1.5 gives a matching lower bound n^{3−o(1)}. Indeed, take an arbitrary proper edge-colouring of K_5 with no rainbow K_4, and take points p_1, …, p_5 in general position in R³. The existence of appropriate values of z_κ follows from the fact that any four of the p_i are linearly dependent (but any three are independent), and each colour is used at most twice. It is easy to deduce that ex(n, K_{s+1}, rainbow-K_s) = n^{s−1−o(1)} for all s ≥ 4.

When s = 4 then r_s = 7 (since any triangle is in at most one K_4), leaving the case (r, s) = (6, 4). Unfortunately, in this case Theorem 1.5 does not give a lower bound of n^{3−o(1)}. (To see this, observe that to get such a bound the corresponding points p_v would all have to be non-zero. Then we can use the alternative formulation Theorem 1.5′ to see that we would have to be able to draw a properly edge-coloured K_6 in the plane such that there is no rainbow K_4 and lines of edges of the same colour are either all parallel or go through the same point. Applying an appropriate projection and affine transformation, we may assume that we have two colour classes where the edges are all parallel, and these two parallel directions are perpendicular. This leaves essentially two cases to be checked, and neither of them yields an appropriate configuration.) However, we can still deduce a lower bound of ex(n, K_6, rainbow-K_4) ≥ n^{12/5−o(1)}, as sketched below.

We can take 6 points p_0 = 0 and p_a = e^{2πia/5} (for a = 1, …, 5), that is, the vertices of a regular pentagon together with its centre. We define a colouring c as follows. Give parallel lines between vertices of the pentagon the same colour, and also give the same colour to the edge incident at the centre which is perpendicular to these lines (see Figure 1). This gives a proper edge-colouring of K_6 and corresponding points in 2 dimensions for which the conditions of Theorem 1.5 are satisfied, giving a lower bound of n^{2−o(1)}. (The point z_κ is chosen to be p_a when p_0p_a has colour κ.) This can be improved to n^{12/5−o(1)} by a product argument as follows. Looking at the construction, we see that our graph G is 6-partite with classes V_0, …, V_5, at most n vertices, and a proper edge-colouring κ such that the following hold.
• There are (at least) n^{2−o(1)} copies of K_6 in G.
• There is a 5-colouring c of the edges of K_6 (on vertex set {0, …, 5}) with no rainbow K_4 such that whenever vertices v_{i_1}, …, v_{i_4} (with v_{i_j} ∈ V_{i_j} and the i_j distinct) span a K_4 in G, the map v_{i_j} ↦ i_j gives an isomorphism of colourings between the restrictions of κ and c to the appropriate four-vertex graphs (i.e., κ(v_{i_j}v_{i_l}) = κ(v_{i_{j′}}v_{i_{l′}}) if and only if c(i_ji_l) = c(i_{j′}i_{l′})).

Moreover, this 5-colouring c has the property that for all i, j ∈ {0, …, 5} there is a permutation of the vertices {0, …, 5} which is an automorphism of colourings and maps i to j. (Indeed, we can take rotations of the pentagon when i, j ≠ 0, and we can take the permutation (01)(34) when i = 0, j = 1.) We construct a new graph as follows. For each i ∈ {0, …, 5}, pick a permutation π_i of {0, …, 5} which gives a colouring automorphism of c and sends i to 0. Define a 6-partite graph G^i obtained from G by permuting the vertex classes: G^i has classes V^i_0, …, V^i_5 given by V^i_a = V_{π_i(a)} and the same edge set as G. Let G′ be the product of these 6-partite graphs, that is, it is 6-partite with vertex classes W_a = V^0_a × V^1_a × ⋯ × V^5_a, and two vertices (v_0, …, v_5) ∈ W_a and (w_0, …, w_5) ∈ W_b are joined by an edge if v_iw_i ∈ E(G) for all i. Moreover, colour such an edge by the colour (κ(v_0w_0), …, κ(v_5w_5)). It is easy to check that the colouring is proper, G′ contains no rainbow K_4, G′ has at most n^5 vertices in each class, and G′ contains at least n^{12−o(1)} copies of K_6, giving the bound stated.

This leaves some open questions about ex(n, K_r, rainbow-K_s). It would be interesting to determine its order of magnitude for (r, s) = (6, 4), or the magnitude for other pairs with s < r < r_s.

King's graphs

Given positive integers k, l ≥ 2, write H_{k,l} for the graph with vertex set {1, …, k} × {1, …, l} where (a, b) and (a′, b′) are joined by an edge if and only if they are distinct and |a − a′|, |b − b′| ≤ 1. In other words, H_{k,l} is the strong product of a path with k points and a path with l points, sometimes called the k × l king's graph. We can use our results to show that ex(n, H_{k,l}, rainbow-K_4) = n^{k+l−1−o(1)}.

First consider the upper bound. It is easy to see that the union of a full row and a full column of H_{k,l} (a set of k + l − 1 vertices) is a c-spanning set for every proper edge-colouring c of H_{k,l} without a rainbow K_4. (Indeed, this follows from the fact that we can add the other vertices one by one, creating a new copy of K_4 in our set in each step: in a properly coloured K_4 that is not rainbow, the repeated colour must appear on a pair of opposite edges, and each such pair contains an edge incident to the newly added vertex.) Since any edge is contained in the union of a row and a column, Proposition 5.1 gives ex(n, H_{k,l}, rainbow-K_4) = o(n^{k+l−1}).

For the lower bound, consider an edge-colouring c of H_{k,l} with c((a, b)(a + 1, b)) = a, where the other edges are given arbitrary distinct colours. This gives a proper edge-colouring of H_{k,l} with no rainbow K_4. For each vertex (a, b) of H_{k,l}, define p_{a,b} ∈ R^{k+l} to be the vector whose i-th coordinate is 1 if i = a, is (−1)^a if i = k + b, and is 0 otherwise. For each 1 ≤ a ≤ k − 1 we let z_a ∈ R^{k+l} be the vector with all entries zero except the a-th and (a + 1)-th coordinates, which are 1, and for each other colour κ used in the colouring of H_{k,l} we take z_κ = p_v + p_w, where vw is the unique edge of colour κ. Then we have p_{a,b} + p_{a+1,b} = z_a, so the conditions of Theorem 1.5 are satisfied. The dimension of the subspace of R^{k+l} spanned by the vectors p_{a,b} is at least k + l − 1, since p_{1,l}, p_{1,l−1}, …, p_{1,1}, p_{2,1}, p_{3,1}, …, p_{k,1} are linearly independent. We get the required lower bound n^{k+l−1−o(1)}.
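Since the vectors p_{a,b} are given by an explicit coordinate formula, the two properties the argument needs are easy to verify numerically. A minimal MATLAB check (our own illustration, using the coordinate choice stated above):

```matlab
% Check the king's graph construction for sample sizes k, l.
k = 4; l = 3; m = k + l;
p = @(a, b) [(1:k)' == a; ((-1)^a) * ((1:l)' == b)];   % p_{a,b} in R^{k+l}

% p_{a,b} + p_{a+1,b} should equal z_a = e_a + e_{a+1} for every a, b.
for a = 1:k-1
    for b = 1:l
        z = zeros(m, 1); z([a a+1]) = 1;
        assert(isequal(p(a, b) + p(a+1, b), z));
    end
end

% The k+l-1 listed vectors p_{1,l},...,p_{1,1},p_{2,1},...,p_{k,1}
% should be linearly independent.
V = [cell2mat(arrayfun(@(b) p(1, b), l:-1:1, 'UniformOutput', false)), ...
     cell2mat(arrayfun(@(a) p(a, 1), 2:k,   'UniformOutput', false))];
fprintf('rank = %d (expected %d)\n', rank(V), k + l - 1);
```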
So it suffices to show that if ε and δ are sufficiently small (depending on p_1, …, p_{r+1} only), then the event above implies that |⟨x_a, x_{r+1}⟩ − ⟨p_a, p_{r+1}⟩| < c for all 2 ≤ a ≤ r; this follows by comparing each inner product with its counterpart for the p's and applying the triangle inequality.

Lemma A.2. Let r be a positive integer and let p_1, …, p_r be linearly independent points in S^{r−1}. Then there exist real numbers α_1 > 0 and h_1 such that whenever d ≥ r is a positive integer and 0 < c < 1, the probability that r points x_1, …, x_r chosen independently and uniformly at random on S^d satisfy |⟨x_i, x_j⟩ − ⟨p_i, p_j⟩| < c for all 1 ≤ i, j ≤ r is at least α_1^d c^{h_1}.

Proof. The proof is essentially the same as for the previous lemma. We prove the statement by induction on r. The case r = 1 is trivial. Now assume that r ≥ 2 and that the statement holds for smaller values of r. By symmetry, we may assume that x_1 = (0, 0, …, 0, 1) ∈ S^d and p_1 = (0, 0, …, 0, 1) ∈ S^{r−1}. Define p′_a and x′_a for a ≥ 2 as in the proof of Lemma A.1. Let ε > 0 be some small constant to be determined later. By induction, we have

P[|⟨x′_a, x′_b⟩ − ⟨p′_a, p′_b⟩| < εc for all 2 ≤ a, b ≤ r] ≥ α_1^{d−1}(εc)^{h_1}

for some α_1 > 0 and h_1 (where the constants depend only on p_1, …, p_r). It follows that, if ε is sufficiently small, then with probability at least as large as in the bound above (combined with the corresponding conditions on the inner products ⟨x_a, x_1⟩, controlled via Lemma 2.1) we have |⟨x_a, x_b⟩ − ⟨p_a, p_b⟩| < c for all 1 ≤ a, b ≤ r. Choosing a sufficiently small ε > 0 gives the result.
2020-03-06T02:00:50.706Z
2020-03-05T00:00:00.000
{ "year": 2020, "sha1": "3fc1d8a7092bfabcc1ab18dad0235633c99712ba", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/3ACA5B930E52081F0717810A454B4DB6/S0963548320000589a.pdf/div-class-title-generalizations-of-the-ruzsa-szemeredi-and-rainbow-turan-problems-for-cliques-div.pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "b11622a9069594aa8c68e6f79527435cce03569b", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
81149838
pes2o/s2orc
v3-fos-license
Autosegmentation of the rectum on megavoltage image guidance scans

Abstract

Autosegmentation of image guidance (IG) scans is crucial for streamlining and optimising delivered dose calculation in radiotherapy. By accounting for interfraction motion, daily delivered dose can be accumulated and incorporated into automated systems for adaptive radiotherapy. Autosegmentation of IG scans is challenging due to poorer image quality than typical planning kilovoltage computed tomography (kVCT) systems, and the resulting reduction of soft tissue contrast in regions such as the pelvis makes organ boundaries less distinguishable. Current autosegmentation solutions generally involve propagation of planning contours to the IG scan by deformable image registration (DIR). Here, we present a novel approach for primary autosegmentation of the rectum on megavoltage IG scans acquired during prostate radiotherapy, based on the Chan-Vese algorithm. Pre-processing steps such as Hounsfield unit/intensity scaling, identifying search regions, dealing with air, and handling the prostate are detailed. Post-processing features include identification of implausible contours (nominally those affected by muscle or air), 3D self-checking, smoothing, and interpolation. In cases where the algorithm struggles, the best estimate on a given slice may revert to the propagated kVCT rectal contour. Algorithm parameters were optimised systematically for a training cohort of 26 scans, and tested on a validation cohort of 30 scans, from 10 patients. Manual intervention was not required. Comparing Chan-Vese autocontours with contours manually segmented by an experienced clinical oncologist achieved a mean Dice Similarity Coefficient of 0.78 (SE < 0.011). This was comparable with DIR methods for kVCT and CBCT published in the literature. The autosegmentation system was developed within the VoxTox Research Programme for accumulation of delivered dose to the rectum in prostate radiotherapy, but may have applicability to further anatomical sites and imaging modalities.

Introduction

Automated segmentation of the anatomy, or autosegmentation, is crucial for optimising the efficacy of adaptive radiotherapy (ART) (Jaffray et al 2010, Godley et al 2013, Thor et al 2013, Whitfield et al 2013). Reactive adaptations to a patient's radiation treatment plan may be necessary if anatomical changes occur during treatment resulting in deviations from the intended planned dose. Image guided radiotherapy (IGRT) facilitates visualisation of the patient's anatomy throughout the course of treatment and offers a potential platform for assessing dosimetric implications. However, the expanse of information contained within IGRT images is not currently being realised to its full potential, and this is partly due to the dependency on manual contouring. The development of robust and automated approaches to segmentation has been identified as a key aspect in the pursuit of delivered dose calculation for ART (Jaffray et al 2010), as manual contouring of daily IG scans is unfeasible. Not only would this introduce an impracticable excess to the clinical workload (Gambacorta et al 2013, Scaife et al 2014), but additional training would be required due to the poorer soft tissue definition of IG scans when compared with the more familiar kilovoltage (kV) treatment planning scans (Whitfield et al 2013).
The reduction in image quality is due to the lower contrast and signal-to-noise ratio associated with cone-beam computed tomography (CBCT) and megavoltage CT (MVCT) imaging (Chao et al 2008, Jackowiak et al 2015). Automated solutions present the opportunity to expedite and standardise anatomical segmentation of IG scans.

The purpose of this work is to develop an autosegmentation tool to identify the rectum on MVCT IG scans for patients undergoing prostate IGRT. This review of the literature focusses on segmentation tools relevant to this anatomy. The motivation is that daily segmentation could facilitate quantitative tracking of interfraction rectal motion and deformation throughout the course of treatment (Scaife et al 2014). Deviations in rectal positioning from the planning CT scan have been shown to induce differences between the intended planned dose and that actually received; studies demonstrating this have typically relied on manual contouring, and were consequently limited in sample size. One approach attempting to address this limitation was to implement statistical simulations for quantifying motion-inclusive delivered dose (Thor et al 2013). A common recommendation of these studies was the development of robust systems for autosegmentation of the rectum, as a crucial component towards achieving automated ART for prostate radiotherapy.

Autosegmentation of rectal contours has previously been addressed for standard kVCT imaging. Evaluations of selected commercial algorithms by Geraghty et al 2013, and La Macchia et al 2012, found that systems struggled to identify the rectum on the planning kVCT without manual intervention. Despite the superior image quality of kVCT, the prostate-rectum interface was affected by poor or no contrast (particularly at the superior and inferior rectal boundaries), which led to greater inter-observer error. It follows that these difficulties would worsen for poorer quality IG scans. However, Zambrano et al 2013, found no significant differences in rectum registration errors between kVCT-kVCT and CBCT-kVCT using an in-house featurelet-based model, though concluded that their DIR accuracy was not yet sufficient for clinical contour propagation. Gao et al 2006, proposed an intensity modification method (IMM) based on an in-room diagnostic kVCT-on-rails system, which introduced artificial gas with adaptive smoothing. The IMM improved upon rigid transformation and DIR alone. However, standard kVCT imaging is not often available for IG, and in our study we sought to exploit images already routinely acquired during treatment.

Autosegmentation techniques developed for standard kVCT may not be transferrable to lower-quality IG scans (Whitfield et al 2013). Alternative approaches have been proposed for CBCT, the most common IG system since being fitted as standard to modern gantry-based linear accelerators. Commercial systems are beginning to support DIR of IG scans (Brock et al 2017), including the implementation of advanced hybrid methods rather than intensity-based approaches (Takayama et al 2017). Several research groups have investigated independent solutions for autosegmentation of the rectum. Chao et al 2008, describe a narrow shell warping technique to map the rectal contour via b-spline DIR from planning kVCT to CBCT, achieving a mean error of 2 mm. This complemented the methods of Xie et al 2008, who applied scale invariance feature transformation and thin plate spline transformation to a set of control points surrounding the rectum, resulting in over 90% accordance between manually segmented and DIR mapped rectum.
Chen et al 2009 reported similar results using a modified Demons algorithm based on CBCT greyscale. Thor et al (2011, 2013) found that the modified Demons DIR algorithm struggled with large rectal deformations, resulting in only 20% of propagated rectal contours being classified as good or acceptable. As such, translation of these tools into fully automated ART has not yet been achieved in clinical practice.

Here we present a novel method that has been developed to automatically identify the rectum on daily MVCT scans acquired for patients undergoing IGRT to the prostate using TomoTherapy® (Accuray, Sunnyvale, CA). The basis of the contouring is the Chan-Vese algorithm (Chan and Vese 2001), implemented in 2D within the MATLAB coding environment (MathWorks®, Natick, MA). As such, the difficulties previously described for using DIR to identify the rectum are avoided. Full details are provided, including the use of prior knowledge, rigid registration for setup correction, image windowing, and identification of poor contours. The algorithm was developed on training data, and validated on test scans, before integration into the VoxTox research programme (Burnet et al 2017). We demonstrate that IG scans have further use than routine positional verification by extracting quantitative information in the form of anatomical contours from these images. No additional exposures were required to obtain the contours, as IG was already included in the patient pathway. Contrary to the methods discussed above, our approach performs primary segmentation rather than contour propagation, which addresses the challenges associated with the magnitude of shape change and intensity variation observed in the rectum.

Clinical imaging details

The VoxTox research programme is an observational study investigating the link between delivered radiation dose and toxicity (Burnet et al 2017). All patients were treated with TomoTherapy® (Accuray, Sunnyvale, CA), with daily MVCT image guidance scans acquired immediately prior to treatment for the purposes of online positional verification. The VoxTox study received approval from the National Research Ethics Service (NRES) Committee East of England (13/EE/0008) in February 2013 and is part of the UK Clinical Research Network Study Portfolio (UK CRN ID 13716).

An experienced clinician [JES] manually delineated the rectum on 56 MVCT IG scans from 10 prostate cancer patients (approximately 560 slices). These contours were taken as the gold standard when evaluating the accuracy of the autosegmentation. On a subset of 6 scans from the same patients, the rectum was independently delineated by 8 oncologists, including JES (Scaife 2015, Burnet et al 2017). The median Jaccard Conformity Index, JCI (Jaccard 1901), of JES relative to the other observers was 0.83, giving a measure of the inter-observer variability. Twenty-six scans were used to train the autosegmentation algorithm, and 30 test scans were used for validation of the generated autocontours. Test scans were distinct from training scans, and autocontours were visually reviewed by JES. Imaging specifications for the kVCT were: 272×272 pixels per slice, pixel size 1.953 mm, slice thickness 3 mm. Scan length included the full extent of the rectum, from rectosigmoid junction to the most inferior slice containing both ischial tuberosities (Scaife et al 2014). MVCT specifications were: 512×512 pixels per slice, pixel size 0.754 mm, slice thickness 6 mm.
The field of view for MVCT imaging was limited to typically 8-12 slices according to local protocols to minimise additional dose and time for prostate IGRT (Bates et al 2013), so only a proportion of the rectum was imaged.

Figure 1 shows a flow diagram summary of the algorithm for rectal contour detection. The best estimate of the rectal contour is taken from either: (i) a region with air, (ii) the kVCT planning contour for the muscle-associated region (either as-is or modified where air is present in the planning scan), (iii) the autosegmentation result (either from the initial pass or using the smoothed shape for a revised starting contour), or (iv) an interpolated contour. All steps are described in the following sections. It is important to note that identification of the best choice of contour is intrinsic to the algorithm, and does not require manual intervention.

Pre-processing

The following pre-processing steps are applied to MVCT scans to optimise autosegmentation of the rectum. First, a rigid registration is performed to align the daily image with the kVCT scan. The shifts and rotations of this registration replicate the couch positional adjustments applied by the treatment radiographers on set, and are obtained from TomoTherapy archives (Romanchikova et al 2018). Once registered, a median filter of width 5 pixels is applied to reduce noise, image intensities are rescaled to improve contrast, and any arising complexities due to air pockets are addressed. These steps are detailed below.

Rescaling Hounsfield units

To enhance contrast between tissue, air, and bone, the MVCT Hounsfield Units (HU) are rescaled to intensity values between 0 and 1, as illustrated in figure 2. Rescaling parameters were selected based on clinically optimal windowing parameters. The contrast between the rectum and surrounding material is improved by assigning the rectal wall and contents an intensity approaching 1, and surrounding tissue an intensity approaching 0. In the 'critical range' found for rectal contents between −10 HU and 100 HU (derived from a set of examined scans), pixels are rescaled and assigned an intensity value between 0 and 1. Pixels between 30 HU and 60 HU are assigned an intensity of 1, with linear ramps up to these values as shown in figure 2. The linear ramp function is a simple method for applying extra weighting to material identified as lying within the rectum, ramping off as it becomes less certain whether material should be included in the rectum. Pixel values greater than 100 HU are assigned an intensity of 0. Pixel values less than −130 HU are assumed to be gas pockets within the rectal contour, so are assigned an intensity value similar to rectal matter to aid the autosegmentation process. The rectal gas threshold of −130 HU was determined empirically and differs from the standard air value of −1000 HU due to traces of solid/liquid matter in the air pockets, and partial volume effects. Larger gaseous regions are treated as a special case and are discussed below.

Figure 2. Rescaling from Hounsfield Units (HU) to scaled intensity. Pixels less than −130 HU are identified as rectal gas and are rescaled to an intensity value of 1 to be included as rectal content. Pixels between −10 HU and 100 HU are in the 'critical range' identified as rectal contents, and are rescaled to intensity values between 0 and 1.
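As a concrete illustration of the rescaling just described, here is a minimal MATLAB sketch (our own, not the VoxTox implementation; behaviour for HU between −130 and −10 is not specified in the text and is left at 0 here):

```matlab
function I = rescaleHU(hu)
% Map Hounsfield Units to intensities in [0, 1] for rectal segmentation.
% Piecewise-linear: 0 outside the critical range, plateau of 1 on [30, 60] HU.
    I = zeros(size(hu));
    rampUp   = hu >= -10 & hu < 30;
    plateau  = hu >= 30  & hu <= 60;
    rampDown = hu > 60   & hu <= 100;
    I(rampUp)   = (hu(rampUp) + 10) / 40;     % linear ramp: 0 at -10 HU up to 1 at 30 HU
    I(plateau)  = 1;
    I(rampDown) = (100 - hu(rampDown)) / 40;  % linear ramp: 1 at 60 HU down to 0 at 100 HU
    I(hu < -130) = 1;                         % gas pockets treated as rectal content
end
```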
Search region

To increase the robustness and efficiency of the algorithm, a search region is defined on the MVCT image by expanding around the original location of the rectum, identified using prior knowledge of the kVCT planning scan rectal contour following rigid registration. The area of expansion of the region of interest (ROI) is based on the rectum's maximum estimated displacements, obtained from a consideration of rectal contours defined manually by several clinicians (Scaife et al 2014). Values of the expansion on the MVCT scan are taken as 38 mm (50 pixels) anteriorly, 15 mm (20 pixels) posteriorly, and 30 mm (40 pixels) left and right. In addition, a posterior limit of the ROI is defined by the location of the spine, if present and identifiable on a given slice (using a thresholding approach).

Dealing with air

The presence of air, or rectal gas, in the scan provides a useful marker of the rectum, but is also a potential source of confusion to an automated algorithm. For each MVCT image slice, the largest region of connected air pixels, as determined following intensity scaling, is found using the MATLAB regionprops function. For regions of air spanning approximately 85 to 340 mm² (an area of 150 to 600 pixels), the largest connected region is identified, with any resulting 'holes' filled in. The region is enlarged by 6 mm (8 pixels) to allow for surrounding rectal wall. Intensity rescaling serves to ensure that smaller gas regions, less than 85 mm² in area, tend to be included within the rectal contour on applying the autosegmentation algorithm. Regions identified as rectal gas spanning over 340 mm² are explicitly included within the rectal contour by simply defining the rectal contour as this area plus a margin to account for the rectal wall. In addition, some smoothing of the contour is applied to give a realistic solution. Figure 3 illustrates two cases for dealing with smaller (a and b), and larger (c and d) air regions. Autocontours derived from the air regions are shown in the original scans, figures 3(a) and (c), and are shown alongside the clinician-defined contours for comparison. Some spuriously detected 'air regions' are disregarded if the centre of the air region does not lie within the location of the original rectal planning scan contour. In some cases, the kVCT planning contours are propagated to determine the best estimate of a particular MVCT slice, as detailed below. To account for potential changes in the MVCT rectal contour due to air, the area of air in the kVCT scan is evaluated using the above approach. The kVCT rectal contour is then reduced (using the MATLAB erosion function) by the difference between the kVCT- and MVCT-determined air areas, to produce the best estimate for the MVCT rectal contour for that slice.

Dealing with the prostate

Because the MVCT IG scans are used for target localisation during treatment, the assumption is made that the location of the prostate is consistently positioned between scans. Since the prostate and rectum do not overlap, pixels on the MVCT scan included within the original kVCT prostate contour are avoided by the rectal autosegmentation system. These pixels are assigned an intensity value of 0, so that they do not fall within the expected intensity range of the rectum, effectively biasing the autosegmentation algorithm to exclude these pixels from the rectal contour. Figure 3(d) illustrates this approach.
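The air-handling logic can be sketched in a few lines of MATLAB (our own illustration; bwconncomp, imfill, imdilate and strel are standard Image Processing Toolbox functions, and the pixel thresholds follow the values quoted above):

```matlab
% Locate the largest connected region of gas pixels on a slice and turn it
% into a candidate rectal contour. gasMask: logical mask of gas pixels.
CC = bwconncomp(gasMask);
areas = cellfun(@numel, CC.PixelIdxList);

if ~isempty(areas)
    [maxArea, idx] = max(areas);
    airRegion = false(size(gasMask));
    airRegion(CC.PixelIdxList{idx}) = true;
    airRegion = imfill(airRegion, 'holes');   % fill holes inside the region

    if maxArea >= 150                         % >= ~85 mm^2 at 0.754 mm pixels
        % Enlarge by ~6 mm (8 pixels) to allow for the rectal wall. Regions
        % above 600 pixels (~340 mm^2) are used directly as the rectal
        % contour (followed by smoothing); smaller regions instead guide
        % the Chan-Vese segmentation.
        airContour = imdilate(airRegion, strel('disk', 8));
    end
end
```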
In addition, for the purposes of describing rectal position as a function of slice number, a common landmark is identified from the kVCT prostate contour. The reference MVCT slice, at which the rectal origin is defined for plotting, is the slice containing the most anterior coordinate of the prostate contour.

Contouring algorithm

The basic contouring algorithm used is a 2D version of the Chan-Vese algorithm (Chan and Vese 2001). A key determinant in the effectiveness of the algorithm is the use of a good starting point.

Identification of contour starting point

In an early iteration, the starting point of the rectal contour was identified by scanning for appropriate features, using no a priori knowledge, but this was found to be unreliable. The more robust approach adopted here uses the rectal contour manually outlined on the kVCT planning scan as the starting point. Shrinking the original kVCT contour slightly (using erosion with a 3×3 structure) allows a 'bias' parameter in the autosegmentation algorithm to control the subsequent expansion of the contour. An improvement to this starting pseudo-contour was implemented by considering shifts of up to 15 mm (20 pixels) in the location of the starting contour, and choosing the starting location with the highest correlation between the shifted contour mask and the MVCT slice being considered. This identified bright regions of the same shape as the kVCT rectal contour within 15 mm (20 pixels) of the starting contour, and accounted for any slice misalignment.

Autosegmentation algorithm

The 2D Chan-Vese algorithm used (Chan and Vese 2001) was implemented as a standard MATLAB function, activecontour. Two parameters were critical to the contouring operation: (i) a smoothing parameter, governing the smoothness of the final contour, and (ii) a contraction bias parameter giving the weighting assigned to the area of the contour. Increasingly negative values of the contraction parameter encourage expansion of the fitted contour. The values of these parameters were investigated systematically.

Post-processing

Cases were detected where the autosegmentation algorithm did not produce reasonable contours. Post-processing algorithms were therefore developed to identify slices where autosegmentation was poor, and to replace these with an improved estimate of the rectal contour. Autosegmentation contours abutting the edge of the ROI (i.e. the expanded area around the supposed position of the rectum selected for analysis) are rejected as poor contours. Other criteria used to identify erroneous contours are discussed below.

Implausibly large contours and finding the muscle-associated region

In the lower third of the rectum, the reduced image contrast between the rectum and the surrounding soft-tissue musculature makes it difficult to distinguish the rectal contour, particularly on lower quality IG scans. In this situation, the Chan-Vese algorithm tends to over-contour. This is illustrated in figure 4, where the autosegmented contour is displayed alongside the clinician-delineated rectal contour. Areas of poor contrast between the rectum and adjacent organs can lead to similarly large and erroneous contours. A threshold value for the contoured area was therefore implemented to identify erroneously large contours. The value of the threshold was identified by considering the relationship between contour area and accuracy of the corresponding autosegmented contour.
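The starting-contour search and the Chan-Vese call might look as follows (our own sketch: activecontour with its 'Chan-Vese' method and the 'SmoothFactor' and 'ContractionBias' options is the genuine MATLAB interface, while the shift search is simplified to a coarse grid and the iteration count of 300 is an illustrative choice):

```matlab
% Starting point: erode the planning contour, then search shifts of up to
% 20 pixels for the best match between the shifted mask and the MVCT slice.
seed = imerode(kvctMask, ones(3));          % shrink so the bias can expand it
best = -inf;
for dx = -20:4:20                           % coarse grid for brevity
    for dy = -20:4:20
        shifted = circshift(seed, [dy dx]);
        score = sum(mvctSlice(shifted));    % total intensity under the mask
        if score > best
            best = score;
            startMask = shifted;
        end
    end
end

% Chan-Vese segmentation with the optimised parameters from the paper.
contour = activecontour(mvctSlice, startMask, 300, 'Chan-Vese', ...
    'SmoothFactor', 6, 'ContractionBias', -1.0);
```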
Accuracy was characterised using the JCI, defined for two contours (in this case comparing autosegmented against clinician-contoured) as the intersection area divided by the union area of the two contours. A value of one corresponds to identical contours, and values below 0.5 are relatively poor. Figure 5 shows the relationship between JCI, comparing auto and manual contouring, and the area of the autosegmented region, after subtraction of air. Many of the large contours correspond to the lower third of the rectum, where the autosegmentation is systematically over-estimated. A threshold contour area of 1420 mm² (2500 pixels) effectively separates poor quality over-contoured slices from more accurate contours with a higher JCI.

The observation that large errors systematically occur in the muscle region is used to identify the extent of the muscle-associated region in the lower third of the rectum. The top of this muscle-associated region is chosen as the most superior slice with an over-large contour area, but not beyond the 6th slice from the bottom of the MVCT scan. In some cases, slices inferior to this critical slice do not have an over-large contour, but are nevertheless identified as being in the muscle-associated region. In the event of no such slice being found, a default of the second-most inferior slice is chosen as the end of the muscle-associated region. As a default, contours in this muscle-associated region are taken from the kVCT planning scan, which were manually delineated by the clinician. Two exceptions to this occur when air is present. Where a significant region of air is detected in the MVCT slice, the corresponding air region is used directly as the rectal contour, as previously discussed. Where a significant region of air is detected in a kVCT slice, but there is no air in the corresponding MVCT slice, the original kVCT rectal contour is reduced by an amount corresponding to the air region to produce a best estimate MVCT rectal contour in the absence of air.

Smoothing and interpolation in 3D

Having identified the 'best estimate' contours on each slice of the MVCT scan, the three-dimensional (3D) structure is assessed to determine whether errors have occurred in the initial autosegmentation. A smoothing interpolation scheme is used to produce a smooth 3D rectal surface from the MVCT contours. This is used to identify, and improve on, erroneous slices. Slices already identified as having a poor contour due to abutting the edge of the search region, or with an excessively large contour (whilst outside the muscle-associated lower rectal region), are omitted when calculating this smoothed shape. Contours for each slice are represented in a polar coordinate system where r is the distance from the centre and θ gives the angular position. Contours are interpolated onto 100 values of θ, evenly spaced around the circumference, and the origin of each slice is taken as the centroid. Therefore, the full scan can be represented in an r-θ-z coordinate system, where z locates the slice position in the cranio-caudal direction. The z-origin is taken as the reference MVCT slice, identified from the kVCT prostate contour as described previously. In this way, the radius r is expressed as a function of regularly gridded values of θ and z, facilitating further analysis. A smooth function is fitted to the radius profile (r) in both the circumferential (θ) and cranio-caudal (z) directions to obtain a new set of values of r corresponding to a smoothed shape.
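Two of the quantities used above are easy to state in code. The following sketch (our own; the helper names are hypothetical) computes the JCI for a pair of binary masks, flags implausibly large contours, and resamples a slice contour onto 100 evenly spaced angles about its centroid, the representation used for the 3D smoothing:

```matlab
% Jaccard Conformity Index between two binary masks.
jci = @(A, B) nnz(A & B) / nnz(A | B);

% Contours larger than 2500 pixels (1420 mm^2), after air subtraction,
% are flagged as implausibly large.
tooLarge = @(mask, airPixels) (nnz(mask) - airPixels) > 2500;

% Resample a closed contour (column vectors x, y) onto 100 evenly spaced
% angles about its centroid, giving the radius profile r(theta).
function r = radiusProfile(x, y)
    cx = mean(x); cy = mean(y);
    theta = atan2(y - cy, x - cx);
    rho = hypot(x - cx, y - cy);
    [theta, ia] = unique(theta);        % sort and drop duplicate angles
    rho = rho(ia);
    thetaGrid = linspace(-pi, pi, 101)'; thetaGrid(end) = [];
    % Replicate the samples so interpolation wraps across the -pi/pi join.
    r = interp1([theta - 2*pi; theta; theta + 2*pi], [rho; rho; rho], ...
                thetaGrid, 'linear');
end
```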
These are then converted back to Cartesian coordinates in each slice. Erroneous slices are identified when the JCI between the smoothed and evaluated autosegmented rectal contours falls below a threshold of 0.5. A second segmentation iteration is then performed on these slices, using the smoothed contour as a starting point. If the second-iteration Chan-Vese contour produces a JCI of greater than 0.5 with the smoothed contour, it is used. If the Chan-Vese contour produces a JCI of less than or equal to 0.5, the interpolated contour taken from the smoothed shape is used instead. If the most superior or most inferior slices are affected, autocontours are replaced by the original kVCT contour, rather than using extrapolation. Figure 6 shows the geometry for a typical case where the interpolation scheme is required. The 3D shape, figure 6(a), illustrates three autosegmented slices that were identified as erroneous. The interpolated contours replace these poor-quality autocontours to give a more anatomically-reasonable overall profile. Figure 6(b) shows these contours and the corresponding clinician contour on an image for one of these slices.

Figure 5. The relationship between the autosegmented rectal contour area (after subtracting the area of air pockets) and the Jaccard Conformity Index, JCI, comparing autosegmented and clinician-outlined rectal contours. Implausibly large areas correlate with scans where the JCI is low. This occurs mainly in the muscle-associated region in the lower third of the rectum, where the algorithm systematically over-estimates the rectal contour (prior to post-processing). The horizontal line shows the cut-off area of 2500 pixels (1420 mm²) chosen for implausibly large contour areas.

Training data

A set of 26 training scans was used to develop the algorithm and identify optimal parameter settings. The effect of the contraction bias parameter on the accuracy of the autosegmentation algorithm is shown in figure 7. Mean values for JCI were 0.680, 0.688 and 0.684 for bias values of −0.9, −1.0 and −1.1, respectively. Based on these results, the optimal parameter was taken as −1.0. Similar analyses were used to determine other key values, including the smoothing parameter (optimal value found to be 6). Improvements to the algorithm were also implemented based on observations of poor performance in challenging scans. Figure 7 demonstrates the distribution in the accuracy of the contours, with relatively few 'poor' contours with JCI below 0.5.

Test results

The algorithm optimised on training data was run on 30 test scans (as discussed in section 2.1). Performance of the algorithm was evaluated by calculating JCI scores for autosegmented contours compared to the gold standard. Dice Similarity Coefficient (DSC) (Dice 1945) scores were also calculated to allow comparison with studies in the literature. JCI and DSC scores for propagated planning contours were also calculated. Figure 8 shows JCI results as a function of slice position relative to the prostate (including training data JCIs for reference, with standard error bars). Slices further from the prostate with fewer than five IG scans were excluded due to being subject to large errors when calculating the mean. The mean JCI scores across all slices from the autosegmentation algorithm were 0.69 and 0.67 for the training and test data, respectively. This is an improvement upon the mean JCI scores from the corresponding propagated planning contours of 0.58 and 0.54 for training and test scans, respectively.
The mean DSCs for the test data across all slices were 0.78 and 0.69 for the autosegmentation and propagated planning contours, respectively (figure 9). Standard errors are indicated on the respective plots. Conformity improves with increasing slice distance from the inferior muscle-associated regions. Training and test data have comparable accuracy. Figure 10 summarises these conformity index results (both JCI and DSC) for autosegmentation of the test data as compared with simple propagation of planning scan contours.

Figures 11 and 12 give a further breakdown of the underlying processes informing the contours from the test set. Figure 11 shows the probability associated with each method used to estimate the final contour. The large majority of the slices use the Chan-Vese algorithm to estimate the contour. The kV planning scan is chosen as the best estimate for a significant number of cases in muscle-associated regions, where poor contrast dominates. Air-correction also plays a role in determining the final contour in a significant number of cases. The smoothing/interpolation aspect of the algorithm is used less frequently. Although this has a relatively small impact on the accuracy of the results, this process ensures that the resulting 3D shape is smooth and hence anatomically reasonable.

Figure 12 shows the mean accuracy associated with the different autosegmentation methods. Where the Chan-Vese algorithm is used (either on the first or second iteration), the JCI values exceed 0.7. Contours for slices with large air regions perform similarly. By detecting slices where contours do not fit the smoothed 3D shape, and replacing them with an interpolated contour, an improved estimate of the best contour is achieved. Without this step many of the 3D structures would be much poorer; this step acts as effective 'disaster mitigation'. The worst cases are those in the poor-contrast muscle-associated region of the rectum, where the autocontouring is not effective and the kV contours are used. In the relatively few cases where there is air in the kV planning scan but not in the daily IG scan, the simplified approach of reducing the area of kV contours by the amount of air does not produce accurate results. A more sophisticated approach, for example using an anatomically-based deformation model, may improve these cases.

Discussion

An autosegmentation algorithm was developed to identify rectal contours on daily MVCT scans for patients undergoing prostate IGRT. This novel approach involves primary segmentation rather than DIR, as DIR can struggle when dealing with large magnitudes of rectal deformation and varying intensities of rectal contents from day to day. The method uses a modified 2D Chan-Vese algorithm (Chan and Vese 2001), with HU/intensity scaling and additional self-checks. Slices affected by poor contrast, particularly the lower rectal third and surrounding musculature, are detected automatically and replaced by propagating the corresponding kV planning contour as a best estimate. Post-processing identifies erroneous contours and regenerates reasonable estimates via 3D interpolation. The algorithm is a crucial component, integrated within a wider automated processing system, in the calculation of delivered dose to the rectum within the VoxTox research programme. The autosegmentation algorithm was optimised for identifying the rectal contour on MVCT imaging using a training set of 26 scans from 10 patients.
Specific parameters such as HU scaling, identification of air, and selection of image analysis parameters were optimised by trial and error, or based on observation, so may not represent a 'global minimum'. However, we expect that the algorithm could be adapted for other imaging modalities, and even for other anatomical sites. Validation was performed on 30 test scans. Performance on the autosegmentation test set with respect to the gold standard produced a mean DSC of 0.78 (SE < 0.01). This compared favourably with studies in the literature that used higher quality imaging. When comparing these results, it should be noted that Geraghty et al used contours from multiple observers, which may result in a pessimistic value of DSC compared with results based on a single observer.

Our experience suggests that in regions of poor image contrast, such as the lower rectal third, the autosegmentation algorithm could be complemented through the use of DIR. Use of a fully 3D algorithm, or a machine learning approach, may improve the accuracy of rectal autosegmentation. By implementing a 3D interpolation, not only has it been possible to automatically detect erroneous contours, but the resulting estimated shape is also relatively smooth and hence more anatomically representative. Future work will explore using the autosegmented 2D rectal contours as input to a 3D finite element model, allowing biomechanical expansion and voxel-by-voxel tracking, for improved accuracy of delivered dose calculation.

The autosegmentation algorithm has been successfully implemented to accumulate delivered dose, accounting for interfraction motion, based on daily MVCT imaging (Scaife et al 2015). It has been a vital tool in testing the hypothesis that delivered dose can be a better predictor of rectal toxicity than planned dose within the VoxTox research programme (Shelley et al 2017). This novel approach for autosegmentation of IG scans may contribute to future advances in ART.
Factor Associated With Teacher Satisfaction and Online Teaching Effectiveness Under Adversity Situations: A Case of Vietnamese Teachers During COVID-19

How teachers perform and react to the world-wide pandemic, and how the epidemic affects an education system, may also be used as new conditions to consider ways to enhance SDG4 in developing countries. Regarding that concern, this study investigated 294 teachers' perspectives on their teaching effectiveness and satisfaction during COVID-19. The findings underlined the significant roles of support from various stakeholders, school readiness toward digital transformation, and teachers' anxiety over teacher satisfaction. Notably, teachers' newly absorbed technological and pedagogical skills do elevate their teaching effectiveness but do not lead to higher satisfaction during the pandemic.

There is no boundary that can stop the impact of COVID-19 across countries and industries. As a country which started dealing with COVID-19 very early, Vietnam applied a school closure policy for schools all over the nation from February 4, 2020. The Vietnamese education system had to face a trilemma of ensuring safety, learning progress, and a proper living standard for teachers. On the one hand, the threats of the SARS-CoV-2 virus to students and teachers were limited. On the other hand, there were massive changes in teaching and learning habits, as online learning was not a popular solution in the country. In addition, more than one million Vietnamese teachers had to upgrade themselves to master new technologies while concerns about the future were always on their minds.

Having spread from Wuhan, China, since early January 2020, COVID-19 has been labeled with different levels of risk by various governments (Callaway, 2020). Some governments decided to adopt a social distancing policy in early February (Brahma et al., 2020), while other cabinets did not tighten their preventive regulations until early April (University of Oxford, 2020). Empirical evidence from prior coronavirus epidemics reported low levels of transmission in schools. However, by March 18, 2020, school closure policies had been applied in 107 countries (Viner et al., 2020). By early May, more than 1,268 million students, about 72% of all learners across 177 countries and territories, were affected by COVID-19 (UNESCO, 2020). To minimize the negative impact of school closure, universities and educational institutions around the globe established different platforms, resources, and guidelines to fill the gap in teacher competency (Cambridge University, 2020; VPAL, 2020) and elevate learning and teaching practices (MHA, 2020). Notwithstanding, the sudden digital transformation still did not meet the teaching and learning demand, and it caused new educational inequality (Hodges et al., 2020). Such unforeseen changes led to adverse effects on students, parents, and teachers (Hoang, 2020; Hoang et al., 2020), not to mention other damage to the economy and society (Bin Nafisah et al., 2018; Rashid et al., 2015).

As a very close neighboring country with 1,435 km of shared land border with China, Vietnam was alert to the high risk of COVID-19 from very early on (La et al., 2020). Besides the debate on the effectiveness of the national school closure policy (T. Tran, A.-D. Hoang, Y. C. Nguyen, L.-C. Nguyen et al., 2020), there are also arguments about the tuition that parents have to pay during school closure.
While teachers in public schools were not affected by the tuition contest, teachers in private schools struggled, especially kindergarten teachers, whose grade levels are not appropriate for online classes. Specifically, among nearly 42,000 teachers with suspended working contracts and no salary, 29,700 were kindergarten teachers (Thanh, 2020). On March 3, 2020, a group of more than 150 private education institutions sent a letter to the Vietnamese government, asking for support on policy, regulations, and taxation. According to the letter, about 70% of private education institutions would go bankrupt within the next 3 months as their cash flows were disrupted (Nguyen, 2020). Tackling the cost and damage of COVID-19 is a crucial mission of scientists, especially researchers in developing countries like Vietnam (Vuong, 2018). Regardless of the sources and the level of issues, these difficulties seriously challenge teachers' motivation and commitment to the teaching profession (Canrinus et al., 2012). Thus, sustaining teachers' mental health during the pandemic is also very important.

Teaching effectiveness and teacher satisfaction are the main concerns of educational leaders in most countries and territories (Mulford, 2003; OECD, 2005). However, there is a lack of studies on teacher satisfaction and teaching effectiveness under the circumstances of school closure and social distancing policies due to a pandemic. Thus, besides responding to the call for research to prevent and minimize the impact of COVID-19 (Elseviers, 2020), this study also extends the literature on teacher satisfaction and effectiveness, with a focus on the chaotic situation of a global crisis. This empirical evidence can contribute to improving the way teachers overcome future adversity situations. Concerning the influence of stress, perceived support, school readiness, and teachers' newly absorbed skills over their satisfaction and teaching effectiveness, this study focused on the following research questions:

1. How do teachers' perceptions of the impact of COVID-19 affect their satisfaction and online teaching effectiveness?
2. How does teachers' perceived support affect their satisfaction and online teaching effectiveness?
3. How does school readiness toward online learning affect teacher satisfaction and online teaching effectiveness?
4. How do teachers' newly absorbed knowledge and skills affect their satisfaction and online teaching effectiveness?

Teacher Satisfaction

Teacher satisfaction is crucial to the operational excellence of any educational institution. Satisfaction in the teaching profession is quite different from that in other occupations due to the nature of its mechanism (Jorde-Bloom, 1986), antecedents, and outcomes (Skaalvik & Skaalvik, 2011). As life-long learners, teachers always seek new opportunities to develop themselves and raise their standards continuously (Little, 1995). Thus, maintaining intrinsic and extrinsic motivation is an essential concern regarding the need for sustainable education quality (Hoang et al., 2020b; Pearson & Moomaw, 2005). Scholars have noted that there are two broad measures or aspects of people's career satisfaction, instead of a classical concept of a simple continuum or single measure (Holdaway, 1978; Nias, 1981). On the one hand, teachers are most satisfied by matters intrinsic to the role of teaching.
On the other hand, teachers are dissatisfied with issues extrinsic to the task of teaching and working, such as the broader domain of society, governments, and the employing body (Dinham & Scott, 1996). Recently, newly proposed clustering approaches for teacher job satisfaction follow the initial research of Hofstede (1980) on six dimensions of cultural values, which include an aspect of collective versus individual behavior (Klassen et al., 2010). In particular, different cultures also make various contributions to the notion of teacher job satisfaction. Teachers from collective cultures like East Asian countries often have higher job commitment due to a greater eagerness to adhere to organizational arrangements (Kirkman & Shapiro, 2001). Even among Chinese teachers working in Western countries, those who hold to the long-established value of following leaders experience pressure adversely compared with their countrymen who have pared down that traditional belief (Xie et al., 2008). Setting aside identity and cultural aspects, Leithwood and Sun (2012) stated that leadership can influence both the collective and individual behavior of teachers. Caprara et al. (2003) confirmed that teachers' performance does enrich teacher job satisfaction in both collectivistic and individualistic cultures. However, Muhammad Arifin (2015) made the important point that, despite the significant influence of motivation on teacher job satisfaction, its impact on teacher efficiency still needs more investigation.

Factors associated with teachers' satisfaction can be categorized by the origin of the problem (internally or externally caused) (Thibodeaux, 2014) or by the level of challenge (Klassen et al., 2010). In particular, teachers themselves encounter adversity situations that generate conflicts and stress (Cooper & Travers, 2012; Gold & Roth, 2013). Besides teachers' perceived unfairness and sadness, different stakeholders such as students, colleagues, school managers, school administrators (Sergiovanni, 1967), and media (Hargreaves & Fullan, 1998) can also influence teacher job satisfaction. Regarding the hierarchy of factors associated with teacher satisfaction, Dinham and Scott (1998) proposed an eight-factor model to capture the notion of teacher satisfaction over three domains: core teaching activities, school-related factors, and society-related factors. Day et al. (2007) and Dörnyei and Ushioda (2013) both confirmed indicators related to the core teaching profession as the most critical factor in maintaining a high level of teacher satisfaction. In contrast, they also stated that it is not easy to make teachers happy with school leadership, working culture, organizational structure, decision-making processes, or school reputation. Despite these challenges, actions aiming to enhance teacher satisfaction by tackling school-level issues are often acknowledged by teachers (Leithwood & Sun, 2012; Nguni et al., 2006). Regardless of the sources and the level of issues, these difficulties seriously challenge teachers' motivation and commitment to the teaching profession (Canrinus et al., 2012). Accordingly, we suggest four hypotheses to examine the influence of teacher perceptions, the support teachers received, the readiness of their schools toward online learning during the pandemic, and their newly absorbed competencies, over teacher satisfaction during COVID-19.

Hypothesis 1a: Stress feeling of COVID-19 has a negative impact on teacher satisfaction.
Hypothesis 2a: Teachers' perceived support has a positive impact on teacher satisfaction.
Hypothesis 3a: School readiness toward online learning has a positive impact on teacher satisfaction.
Hypothesis 4a: Newly absorbed knowledge and skills during the pandemic period have a positive impact on teacher satisfaction.

Online Teaching Effectiveness

Teaching Effectiveness. Teaching is one of the most stressful occupations (Johnson et al., 2005), in which teachers have to maintain high levels of performance regardless of their condition (Chaplain, 1995). Everyone from early-stage to experienced teachers, and from undergraduate students to professors, has to face the stressful nature of teaching (Chaplain, 2008; Kyriacou, 1987). There are three factors of performance: task performance, citizenship behavior, and counterproductive behavior (Colquitt et al., 2011). Teacher performance, therefore, is the demonstration of teachers' impact on student learning and can be measured through student achievement, observation of pedagogical practice, and school or student survey results (Lieberman & Miller, 1984). Teacher effectiveness is the aggregated impact of teacher behaviors on student learning (Chi et al., 2013; Seidel & Shavelson, 2007). Marsh and Bailey (1991) proposed a multi-dimensional approach to measure teacher effectiveness, including learning value, teaching enthusiasm, clear expression, group interaction, the harmonious relationship between teachers and students, course context, evaluation methods, extracurricular assignments, and learning difficulty. Notwithstanding, teacher effectiveness influences student achievement the most, in comparison with other determinants like class size, in-class composition, or previous student achievement (Darling-Hammond & Youngs, 2002; Staiger & Rockoff, 2010).

Teachers' psychological characteristics have long been hypothesized to contribute to teaching effectiveness (Barr, 1952). Therefore, a state of well-being is necessary for effective teaching performance. Calabrese (1987) noted that teachers with higher teaching effectiveness and engagement are the ones who have a lower level of stress. Overall, factors related to organizational culture, such as working pressure, learning culture, and interpersonal issues with students or colleagues, can lead to a higher level of teacher stress (Antoniou et al., 2006). For instance, the most common sources of stress for most teachers are student behavior and workload (Boyle et al., 1995). Regarding the internal factors that affect teacher performance, Maddux (1995) stated that teacher self-efficacy is the most crucial for teacher performance. In particular, teachers' personality, perception, emotion, and cognition determine the way they develop their learning capabilities, as well as the social interaction needed for better performance. In some specific circumstances, physical and mental support is needed to enhance the educational process. Supportiveness is not just providing help; it also includes interactive behaviors such as offering comfort and exchanging material resources, knowledge, and information (Sarros & Sarros, 1992). A considerable number of studies on social support and well-being have verified that social support can relieve an entity of pressure, maintain mental health, and increase prosperity at work (Holt-Lunstad et al., 2010; Toker, 2011). Social support is significantly related to well-being and is a significant factor in predicting well-being and teaching effectiveness.
Hsu and Tsai (2013) mentioned that suitable and proper social support from supervisors, peers, and families could enhance teaching effectiveness.

ICT Competency. Given the critical role of Information and Communication Technologies (ICT) in the global digitalization context (Vitanova et al., 2015), both teaching and learning effectiveness could be improved significantly. In almost all countries of the Asia-Pacific region, teacher ICT enhancement programs are accessible to teachers across educational types and levels (Sahito & Vaisanen, 2017). Toward sustainable education, several European countries have recommended the utilization of ICT embedment within future teacher development plans (Usun, 2009). The digital transformation process goes beyond the single purpose of using ICT applications as tools and extends to the renovation of teacher roles and pedagogies toward a constructive learning environment (Valcke et al., 2007). As a result, there are also changes in the requirements, structures, and approaches of teacher continuous professional development programs (Riviou & Sotiriou, 2017). Several factors determine teachers' ICT competencies, such as teachers' attitudes toward ICT, ICT experience and skills, self-efficacy, and perceived usefulness of ICT (Hernández-Ramos et al., 2014). The individual accountability, evaluation, career development, and formal qualifications of teachers also play a vital role in enhancing the ICT skills that can create and maintain the link between ICT, the curriculum, teachers' needs, and professional development within a planned policy framework (Valcke et al., 2007). In short, the literature has pointed out that teachers' attitudes toward new technologies, their perceived framework guidelines, self-efficacy, and experience are crucial determinants of ICT integration in the teaching-learning process (Valcke et al., 2007). Thereby, the overall effectiveness and efficiency of educational activities are improved.

Considering the novel impact of COVID-19 on education, there is a necessity to understand the factors that affect teaching effectiveness, especially under the pressure of a sudden digital transformation due to school suspension. Correspondingly, this research measures the impacts of teachers' perspectives during COVID-19 and teachers' ICT competencies on teaching effectiveness through the following hypotheses:

Hypothesis 1b: Stress feeling of COVID-19 has a negative impact on online teaching effectiveness.
Hypothesis 2b: Teachers' perceived support has a positive impact on online teaching effectiveness.
Hypothesis 3b: School readiness toward online learning has a positive impact on online teaching effectiveness.
Hypothesis 4b: Newly absorbed knowledge and skills during the pandemic period have a positive impact on online teaching effectiveness.

Conceptual Framework

Considering teachers as the main subject of this study, this research investigated how teachers' satisfaction and online teaching effectiveness during COVID-19 varied due to diversity in their perceptions of the pandemic's impact. In particular, we examined teacher satisfaction (SAT) and online teaching effectiveness (ONL_EFF) during COVID-19 as the primary outcomes. Specifically, SAT included teachers' satisfaction with the supportiveness they received toward (i) their daily living (Satis_life) and (ii) online teaching and learning activity (Satis_teach_learn).
Online teaching effectiveness was constructed from (i) teachers' perceived teaching effectiveness (Onl_effective) and (ii) student activeness and engagement (Onl_active). Drawing on the literature review, we constructed the antecedents as follows. First, teachers' perceived perspective regarding COVID-19 (FEEL) included teachers' feelings that there were significant changes in their living habits and financial status. The second construct is the support (SUP) that teachers receive from the government, the teacher union, and the parents' association. Third, school readiness for online teaching (READY) is an accumulation of preparation in ICT infrastructure (Ready_ICT), school policies (Ready_policy), and teacher competency (Ready_teacher). The final construct is a function of the newly acquired technological and pedagogical knowledge (NEW). All of the above items are teachers' self-reports and were measured on a five-point Likert scale (1 = totally disagree, 2 = disagree, 3 = neither disagree nor agree, 4 = agree, and 5 = totally agree). Figure 1 demonstrates the overall structural equation model (SEM) used to examine the relationship of the four factors with teacher satisfaction and online teaching effectiveness under the adverse situation.

Data Collection

The research protocol was approved by EdLab Asia Educational Research and Development Centre's IRB (No. 200402). Researchers collected the data from April 6, 2020 to April 11, 2020, 2 months after the first date of school closure due to COVID-19 in Vietnam. Initially, a preliminary test was conducted with the involvement of thirty K-12 teachers and four principals in Hanoi to validate the measurements. After revising the questionnaire, we spread the survey URL across the Facebook groups of Microsoft Innovative Educator and Vietnam Innovative Education Forum, the most popular online communities of Vietnamese teachers, with 38,600 members and 14,000 members, respectively. Within 1 week, there were 1005 clicks on our survey, which led to 373 potential observations. However, the final dataset includes only 294 observations, due to the exclusion of 79 respondents who violated our cross-checking questions. The final dataset has been stored in Harvard Dataverse (Hoang et al., 2020c), and the full descriptive statistics can also be found in Data in Brief (Vu et al., 2020).

Results and Discussion

Descriptive Statistics

Table 1 presents the demographics of the 294 respondents. The majority of respondents are female (83.3%) and come from public schools (65.0%), and most teachers hold bachelor's degrees (61.6%). Regarding teaching experience, nearly half of the teachers (41.8%) have more than ten years of teaching experience. Regarding teaching subjects, the distribution is quite even: 29.6% in science-related subjects, 23.8% in social science-related subjects, 19.4% in foreign languages, and 27.2% in other subjects. Many teachers teach in primary school (34.0%), while lower, upper, and post-secondary school teachers account for 21.4%, 22.4%, and 19.0%, respectively.

Measurement Model

Measurement Validation. We used SPSS 20 to conduct confirmatory factor analysis (CFA) and AMOS to run structural equation modeling (SEM). Initially, we assessed whether our model has an acceptable goodness of fit, following the suggested two-step approach (Anderson & Gerbing, 1988); an illustrative sketch of this estimation workflow in open-source tooling is given below.
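The original analysis was run in SPSS and AMOS. Purely as an illustrative sketch of the same specify-fit-inspect workflow, the following uses the Python package semopy (not used by the authors). The construct names mirror the text; indicator names not spelled out in the paper (e.g., the FEEL and NEW items) and the input file are hypothetical placeholders, and the exact set of fit indices reported by calc_stats may vary by semopy version.

import pandas as pd
from semopy import Model, calc_stats

# Hypothetical survey table: one Likert-scale column per item, one row per teacher.
data = pd.read_csv("teacher_survey.csv")  # placeholder file name

# Measurement part (latent =~ indicators) and structural part (outcome ~ predictors),
# mirroring the constructs described in the text.
desc = """
FEEL =~ Feel_habit + Feel_finance
SUP =~ Sup_gov + Sup_union + Sup_parent
READY =~ Ready_ICT + Ready_policy + Ready_teacher
NEW =~ New_ICT + New_pedagogy
SAT =~ Satis_life + Satis_teach_learn
ONL_EFF =~ Onl_effective + Onl_active
SAT ~ FEEL + SUP + READY + NEW
ONL_EFF ~ FEEL + SUP + READY + NEW
"""

model = Model(desc)
model.fit(data)
print(calc_stats(model).T)  # chi-square, df, CFI, RMSEA and related fit indices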
The results for goodness of fit are presented in Table 2. As can be seen from the table, the Chi-square of the model is 73.596, the degrees of freedom are 55, and the Chi-square over the degrees of freedom is 1.338 (smaller than 3) (Mantel, 1963). Moreover, the goodness of fit index (GFI) is 0.966 (>0.95), and the adjusted goodness of fit index (AGFI) is 0.935 (>0.90), which indicates a well-fitting model (Hooper et al., 2008). Finally, the model reports a normed-fit index (NFI) of 0.957 (>0.95), a root-mean-square error of approximation (RMSEA) of 0.034 (<0.08), and a comparative fit index (CFI) of 0.988 (>0.95) (Hooper et al., 2008). All in all, all indices suggest a well-fitting model.

Table 3 demonstrates the factor loadings for the CFA at p < 0.001. With 294 observations in our study, a factor loading value of 0.50 is required for each item (Hair et al., 1998). From Table 3, all elements in their related constructs have sufficiently high factor loading values. As a result, the four factors (perceived support, stress feeling, school readiness, and new knowledge absorption), as well as teacher satisfaction and online teaching effectiveness, are measured by reliable indicators. Finally, Table 4 shows the results of the convergent and discriminant validity tests. All of the measurement constructs have composite reliability (CR) of at least 0.7, average variance extracted (AVE) greater than 0.5 (Byrne, 2013), and maximum shared variance (MSV) smaller than AVE (Fornell & Larcker, 1981). The maximum reliability (MaxR(H)) was also examined. Subsequently, the model's discriminant validity is established (Hancock & Mueller, 2001). Besides, the outputs of the factor correlations confirm the negative association between teachers' stress feeling and both teacher satisfaction and online teaching effectiveness.
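As a reading aid for the convergent-validity statistics in Table 4, this is a minimal sketch of how CR and AVE are computed from standardized factor loadings, following Fornell and Larcker (1981); the loading values below are made-up placeholders, not the paper's estimates.

import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where the error variance of a standardized indicator is 1 - loading^2.
    s = loadings.sum()
    error = (1.0 - loadings**2).sum()
    return s**2 / (s**2 + error)

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings.
    return float((loadings**2).mean())

lam = np.array([0.82, 0.78, 0.71])  # placeholder loadings for a 3-indicator construct
print(f"CR  = {composite_reliability(lam):.3f}")       # acceptance criterion: CR >= 0.70
print(f"AVE = {average_variance_extracted(lam):.3f}")  # acceptance criterion: AVE > 0.50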
Structural Model. Table 5 illustrates the outputs of the SEM. As can be seen from the coefficients, perceived support and school readiness have positive relationships with teacher satisfaction. In particular, the more support teachers received from both inside and outside schools, the more appreciation they had. Among the various supports, support from the government is the most crucial to teachers, with an estimated correlation of 0.79, while support from the parents' association weighs only 0.54 (the detailed interactions between constructs and their related factors can be seen in Supplementary Appendix 1). Moreover, if schools were more ready for the transformation (online learning), teachers felt more satisfied. Readiness related to policies and teachers' capabilities contributed most to general school readiness according to teachers' views (0.77 and 0.74, respectively), while ICT infrastructure accounted for a smaller part (0.59). However, these factors (perceived support and school readiness) had no impact on teachers' perception of online teaching effectiveness. In addition, stress feeling in teachers has a negative relationship with both teachers' satisfaction and teachers' perception of online learning, and the two influences are quite similar in weight (−0.32 and −0.33). In other words, when teachers feel they have to change daily habits or their financial plan is threatened due to COVID-19, they tend to have lower satisfaction, and they consider online teaching less effective. Between the change in daily habits and the financial plan, teachers were affected more by daily habit changes, with an estimated correlation of 0.73 compared with 0.34 for the financial threat.

The final remarkable relationship is between newly absorbed knowledge and skills and teachers' perception of online learning. This positive relationship shows that the more teachers acquire new pedagogical and ICT knowledge and skills, the more student engagement and online effectiveness they perceive. For the respondents, new ICT knowledge and expertise is slightly more critical than pedagogy-related knowledge and skills (a difference of 0.14). Additionally, teachers' perception of online learning focuses heavily on students' active engagement (0.91) compared with the effectiveness of teaching and learning (0.49).

Conclusion

Teacher satisfaction is a crucial element of teaching commitment and teaching effectiveness, which contribute to sustainable education development. The main goal of this study was to explore teachers' satisfaction and teaching effectiveness during the sudden digital transformation of teaching and learning due to COVID-19. On the one hand, this study contributes to broadening the literature on educational operation during crises. On the other hand, it highlights essential aspects that educational leaders can consider in order to sustain the efficiency of teaching and learning activities during such a chaotic situation as COVID-19. In short, the research team identified significant influences of teachers' perceived support, stress, and readiness on teacher satisfaction. To promote online teaching effectiveness, school leaders should pay attention to teachers' stress and anxiety, as well as to enhancing online teaching competencies.

First, the final dataset of 294 observations builds convincing evidence confirming the influence of stress on teachers' satisfaction and online teaching effectiveness. In particular, sudden changes in daily routines and teaching habits due to school closure, as well as anxiety regarding current and potential decreases in regular income, had significant negative impacts on both teachers' satisfaction and online teaching effectiveness. This result is consistent with the long-standing finding that variation in psychology may lead to alteration of teaching effectiveness (Barr, 1952), and with the observation that teachers are dissatisfied by extrinsic factors such as contextual or societal effects, as reported by previous scholars (Day, 2013; Dörnyei & Ushioda, 2013).

Second, the perceived support from students' parents, from the unions or relevant authorities, and the readiness of the schools, which influenced teachers' satisfaction the most, made no impact on online teaching effectiveness. By this finding, physical and mental extrinsic support seems to affect teachers differently during the COVID-19 pandemic compared with their regular teaching life. Whereas under usual educational conditions the significant dissatisfiers for teachers are noted to be parents, policymakers, business leaders, or politicians (Hargreaves & Fullan, 1998), these stakeholders play vital roles in raising teachers' satisfaction in adversity.

Third, among the tested factors, newly absorbed knowledge and skills was the only mediator that can help to enhance online teaching effectiveness. However, incorporating new pedagogical and technological skills did not improve teachers' satisfaction during the pandemic.
This finding is a notable contribution to the current literature on teacher continuous professional development, in which learning new knowledge and skills is a critical component of teacher autonomy toward life-long learning and long-term career satisfaction (Little, 1995; Pearson & Moomaw, 2005).

In closing this article, we would like to mention considerable limitations of the work, which can be addressed by future studies. First, with its focus on online teaching effectiveness, this research did not include samples of teachers in mountainous and island areas, whose ICT infrastructure and Internet access are limited. Second, the findings of this research mostly relied on teachers' self-reports within a small group of teachers. Thus, future open databases should be constructed to increase the diversity of research on this topic (Vuong, 2020). As a result, a part of the data might be exaggerated as a consequence of teachers' stress due to COVID-19, while another part might be flattened. Last but not least, the situation of COVID-19 was well handled by the Vietnamese government, so the stress of Vietnamese teachers might not be as acute as that of their colleagues in other countries, especially territories with high levels of COVID-19 spread. Thus, a comparative study would be beneficial to portray the picture of teacher satisfaction during a pandemic.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

Supplemental Material

Supplemental material for this article is available online.
Electroactive Bacteria Associated With Stainless Steel Ennoblement in Seawater

Microorganisms can increase the open-circuit potential of stainless steel immersed in seawater by several hundred millivolts in a phenomenon called ennoblement. This raises the risk of corrosion, as the open-circuit potential may go over the pitting corrosion potential. Despite the large impact of ennoblement, no unifying mechanisms have been described as responsible for the phenomenon. Here we show that the strict electrotroph bacterium "Candidatus Tenderia electrophaga" is detected as an ennoblement biomarker and is only present at temperatures at which we observe ennoblement. This bacterium was previously enriched in biocathode systems. Our results suggest that "Candidatus Tenderia electrophaga," and its previously described extracellular electron transfer metabolism coupled to oxygen reduction activity, could play a central role in modulating stainless steel open-circuit potential and consequently mediating ennoblement.

INTRODUCTION

When immersed in oxic seawater, metals and alloys can form an electrochemical cell with metal oxidation as an anode reaction and oxygen reduction at the cathode. In the case of stainless steel, metal (Cr, Ni, Mo, ...) oxides form a so-called passive layer, largely preventing electron flow between these two electrodes. As a consequence, stainless steel exhibits a measurable electrochemical potential between these two half cells, called the open-circuit potential (OCP), since no current is drawn from the system. Stainless steel OCP results from the concentration of the reactants, the formal half-cell reaction potential (E°′) and the kinetic parameters associated with each half-cell reaction. It can be measured in situ using a reference electrode of a known potential.

Stainless steel ennoblement is a well-known phenomenon corresponding to an increase of the OCP, typically by 400-500 mV, when these alloys are immersed in seawater (Mollica and Trevis, 1976). As the OCP value gets closer to the pitting corrosion potential, the probability of stainless steel pitting and crevice corrosion initiation increases, hence the central problem raised by ennoblement (Mollica, 1992; Zhang and Dexter, 1995). Ennoblement is a biotic process, as it is dependent on microbial colonization and development on the stainless steel surface (Motoda et al., 1990; Scotto and Lai, 1998; Le Bozec et al., 2001; Wei et al., 2005; Gümpel et al., 2006). Stainless steels are commonly used building materials in seawater systems, and despite the large industrial impact of ennoblement, relatively little is known about the diversity and the actual activity of marine microorganisms that colonize stainless steel. There are various hypotheses regarding their possible contributions to ennoblement, as summarized in Little et al. (2008). There are three ways of increasing the OCP: (1) thermodynamics, (2) kinetics and (3) alteration of the nature of the reduction reaction. A decrease of the surface pH could thermodynamically increase the observed OCP. However, as shown by Dexter and Chandrasekaran (2000), the pH changes in a heterogeneous biofilm are highly variable and also challenging to measure. Kinetically, an increase of the cathodic reaction rate can also result in an increase of the OCP.
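To make the thermodynamic route explicit, the pH dependence follows from the Nernst equation for the oxygen reduction half-cell (O2 + 4H+ + 4e− → 2H2O); this worked relation is standard electrochemistry added here as a reading aid, not taken from the paper:

E = E^{\circ} + \frac{RT}{4F}\,\ln\!\left(p_{\mathrm{O_2}}\,[\mathrm{H^+}]^4\right) \approx E^{\circ} + 0.0148\,\log_{10} p_{\mathrm{O_2}} - 0.059\,\mathrm{pH} \quad (\text{at } 25\,^{\circ}\mathrm{C})

Each unit drop in surface pH therefore raises the reversible oxygen electrode potential by roughly 59 mV, which is the thermodynamic mechanism referred to above.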
Previous works have demonstrated that surface microorganisms increase cathodic reduction efficiency (Johnsen and Bardal, 1985; Holthe et al., 1989; Audouard et al., 1994; Zhang and Dexter, 1995; Mollica and Scotto, 1996; Rogne and Steinsmo, 1996; Le Bozec et al., 2001; Larché et al., 2011; Thierry et al., 2015). Other reactions such as manganese oxide reduction in freshwater (Dickinson and Lewandowski, 1996; Gümpel et al., 2006) and the formation of hydrogen peroxide could also contribute to the increase of the OCP, as hydrogen peroxide is a stronger oxidant than oxygen with a higher redox potential (Landoulsi et al., 2008b).

While the mechanisms of ennoblement are still discussed, the seawater temperature has been identified as a critical parameter. Ennoblement is a temperature-dependent process, undergoing a complete inhibition above a critical temperature around 40°C in temperate seawater and freshwater (Scotto et al., 1986; Dupont et al., 1997; Martin et al., 2003; Gümpel et al., 2006; Thierry et al., 2015). This critical temperature seems to vary with geography, as it is 32°C in the Norwegian and Baltic Seas (Bardal et al., 1993; Mattila et al., 2000). Despite sustained research on this important potential modulation, no mechanisms have been described that can explain the potential ennoblement, nor could the primary source of electrons for these cathodic reactions be identified.

Some microorganisms can perform direct extracellular electron transfer to and from electrodes. This was demonstrated in controlled systems such as microbial fuel cells. A wide diversity of microorganisms can channel electrons resulting from soluble substrate oxidation toward an anode, thus creating a measurable current in these bioelectrochemical systems (Philips et al., 2015). Far fewer microorganisms have been identified as cathodic electron acceptors (Rabaey et al., 2008). Electroactive microorganisms such as Geobacter sulfurreducens or Shewanella oneidensis have been used as models to understand the electrogenic metabolism, or the electron pathway from a soluble electron donor to an anode. However, they have also been described as electrotrophs, performing the reverse reaction using an electrode as electron donor. Shewanella oneidensis is able to switch from an electrogenic metabolism to an electrotrophic one by reversing its electron transport pathway (Ross et al., 2011). Similarly, Geobacter species are also able to act as electrotrophs to reduce fumarate to succinate or nitrate to nitrite with electrons provided from a cathode through a direct electron uptake mechanism (Gregory et al., 2004; Strycharz et al., 2011). Geobacter sulfurreducens can increase the open-circuit potential by several hundred mV on stainless steel under anoxic conditions via an electrotrophic metabolism reducing fumarate to succinate (Mehanna et al., 2009, 2010). Despite the absence of a model electrotrophic bacterium cultivated under aerobic conditions, some pure cultures have been able to catalyze the electrochemical reduction of oxygen (Rabaey et al., 2008; Erable et al., 2010), as have some environmentally enriched communities (Strycharz-Glaven et al., 2013; Rothballer et al., 2015; Milner et al., 2016; Rimboud et al., 2017). Recently, the study of a bacterial community developed on a biocathode led to the identification of "Candidatus Tenderia electrophaga": a strict electroautotrophic bacterium able to use a cathode as an electron donor to reduce oxygen and able to fix carbon dioxide (Eddie et al., 2016).
The presence of electrotrophic bacteria under natural conditions in aerated seawater has not yet been proposed as a possible reason for potential ennoblement, despite their apparent ability to change the OCP. As the rationale for this study, we hypothesize that electrotrophic bacteria could be involved in ennoblement by drawing electrons from immersed stainless steel under open-circuit conditions (without additional current provided). However, to our knowledge, no study of stainless steel surface microbial community structure with high-throughput sequencing methods has been carried out yet, even less so in relation to modulation of the electrochemical potential of this material. To test our hypothesis, we thus used the temperature dependence of ennoblement and examined distinctive taxa in ennoblement vs. non-ennoblement conditions.

Experimental Set-Up

We exposed all coupons in 300 L seawater tanks renewed at an approximate rate of 12 L/h with incoming seawater from the Bay of Brest (France) (48°21′32.1″N, 4°33′07.4″W). We also used two natural seawater samples (5 L, n = 3) collected from a Bay of Brest coastal microbial observatory close to the tanks' seawater pump intake. The two samplings took place on March 02, 2015 and March 26, 2015, before and after the coupon exposure. Seawater was pre-filtered on a 3 µm filter and bacteria were collected on a 0.22 µm Sterivex filter.

OCP Measurements and Cathodic Polarization Curves

Stainless steel samples were held by a titanium wire to measure the open-circuit potential against an Ag/AgCl reference electrode. The electrodes were calibrated with a saturated calomel electrode (SCE) REF421 (Radiometer, France). The use of titanium wires has been documented in previous works and inhibits galvanic corrosion at the point of contact with stainless steel (Espelid, 2003). Measurements of OCP and temperature were recorded every 30 min. Five replicates were used per condition. The cathodic polarization curves were drawn on samples exposed for two weeks under the two OCP conditions of interest: with ennoblement at 36°C and without the shift of potential at 40°C. We used a Gamry Reference 600 (Gamry Instruments, United States) from 20 mV over the open-circuit potential to −1.2 V vs. the Ag/AgCl electrode with a scan rate of 0.167 mV/s. The dynamic polarization curves started at +20 mV in order to obtain the first points of the anodic branch without perturbing the oxide layer before the cathodic scan. All potential values were corrected based on the reference electrode calibration with SCE. We obtained an estimation of the passivation current by drawing the intersection of the tangents of the anodic and cathodic branches close to the OCP value.

SEM Imaging

Dedicated coupons (20 mm × 20 mm × 1.5 mm) were fixed for scanning electron microscopy (SEM) with 2.5% glutaraldehyde seawater for 1 h, then rinsed three times in seawater for 15 min. The dehydration process involved four washes of 15 min in increasing concentrations of ethanol (50, 70, 90, and 100%) followed by similar washes in hexamethyldisilazane (HMDS) and ethanol solutions (1/3 HMDS, 1/2 HMDS, 2/3 HMDS, and 100% HMDS). We observed surface communities by SEM on samples exposed in seawater at 36 and 40°C using a Hitachi SU3500 machine (Hitachi High-Technologies, Germany). To perform cell counting, we imaged ten random areas for each condition at ×1000 magnification using backscattered-electron imaging. Pictures were processed with the ImageJ software for automatic cell detection.
After background removal, images were converted into binary black and white with the default threshold of the software. Particle analysis was used with a minimum area of 0.5 µm² and a maximum of 4 µm² for cell detection (Supplementary Figure 3).

Surface Cell Collection

Surface cells were collected immediately after coupon collection using a sterile cell lifter (Thermo Fisher Scientific, United States) and by gentle and uniform scraping into 100 mL of a Tris Buffered Saline (TBS) solution (50 mM Tris, 150 mM NaCl, pH 7.6), under sterile conditions ensured by a Bunsen burner keeping the immediate area sterile. TBS solutions were then stored on ice for transport to the molecular lab. TBS solutions were filtered through 0.22 µm GTTP polycarbonate membranes (Merck Millipore, United States), which were then transferred to PowerBiofilm Bead Tubes from the PowerBiofilm DNA extraction kit (MoBio, United States). Control samples were collected under identical conditions and visualized by scanning electron microscopy to ensure removal of the cells attached to the surface.

DNA Extraction and Sequencing

The DNA extraction was performed according to the manufacturer's instructions for the PowerBiofilm DNA extraction kit (MoBio, United States). The V4-V5 region of the 16S rRNA gene was amplified with the 518F and 926R primers fused with Illumina adapters and sample-specific sets of barcodes and indexes (Nelson et al., 2014). PCR products were visualized on agarose gels and purified with AMPure XP (Agencourt, United States) reagent. DNA concentration was assessed with Quant-iT PicoGreen dsDNA (Invitrogen, United States) prior to pooling the PCR products at equimolar concentration. Sequencing on the Illumina MiSeq platform was performed at the Josephine Bay Paul Center (Woods Hole, MA, United States). Sequences were deposited in the European Nucleotide Archive under the accession number PRJEB27599.

Bioinformatics Analysis

Quality filtering was done following the recommendations of Minoche et al. (2011), before merging of paired-end reads with Illumina-Utils python scripts on demultiplexed raw reads (Eren et al., 2013). OTU delineation was performed with the Swarm algorithm using the default local linking threshold d = 1 (Mahé et al., 2015). Chimera detection and removal were carried out with VSEARCH (Rognes et al., 2016). The Silva NR 132 database (Quast et al., 2013) was used for taxonomic assignment of Swarm representative sequences with Mothur (Schloss, 2009). We used the Phyloseq R package to calculate alpha diversity indices and the vegan package to compute beta diversity (with Bray-Curtis indices) and non-metric multidimensional scaling (NMDS) ordination (an illustrative sketch of this ordination step accompanies the corresponding results below). Stacked bar plots were produced with ggplot2 (Wickham, 2009). We used the two conditions "with ennoblement" (30°C to 38°C) and "without ennoblement" (40°C) to perform biomarker detection with LEfSe (Segata et al., 2011).

Potential Ennoblement on Stainless Steel and Cathodic Polarization Curves

At the beginning of the incubations, coupons were at −272 mV (±6 mV) vs. the saturated calomel electrode, and after 3 to 5 days of incubation, the OCP increased in all coupons except those incubated at 40°C (Figure 1). The potential ennoblement was highly reproducible among replicates at temperatures from 30°C to 38°C, with a mean increase of the electrochemical potential of 470 mV (±12 mV) for all samples within 4 days of exposure.
In contrast, the open-circuit potential for samples immersed at 40°C changed very little over time (+54 mV, ±8 mV). In our setup, there was thus a critical temperature between 38°C and 40°C under which ennoblement was observed for all samples, with similar maximum electrochemical potential values after day 4 (or day 5 at 38°C), whereas ennoblement did not occur above that temperature. In a subsequent incubation under similar conditions, we carried out a cathodic polarization curve on samples exposed at 36°C and 40°C, and we observed a shift of the polarization curve after two weeks of exposure at 36°C that was not observed at 40°C (Figure 2). We estimated the passivation current to be around 0.01 µA/cm² for all conditions, based on these polarization curves.

FIGURE 1 | Open-Circuit Potential (OCP) vs. time for stainless steel coupons exposed to different temperatures of seawater. Mean value and 95% confidence interval for 5 replicates per condition, or 10 replicates for 30°C as the two sequential series were pooled together. SCE, saturated calomel electrode.

SEM Observations

We used a similar incubation setup at 36°C and 40°C, allowing biomass colonization on immersed stainless steel coupons for observation of surface communities with SEM. We found an average cell density of 11,661 cells/mm² (±773 cells/mm²) at 36°C, and a lower density of 7,219 cells/mm² (±442 cells/mm²) at 40°C. Under both conditions, we observed bacilli and coccobacilli; some very long filamentous bacteria were also observed, but only at 36°C (see Figure 3).

Stainless Steel Bacterial Community

We characterized surface bacterial communities for each condition using 16S rRNA amplicon sequencing. We sequenced 36 libraries and obtained 5,597,422 raw sequences of the bacterial 16S rRNA gene V4-V5 region. After quality filtering and paired-end read merging, 3,653,018 sequences were retained and clustered into 166,164 operational taxonomic units (OTUs) using the Swarm algorithm with the default local linking threshold d = 1 (Mahé et al., 2015). Putative chimeras were removed with the VSEARCH software (Rognes et al., 2016), leading to a high-quality final dataset.

Bacterial 16S rRNA diversity in each sample was compared with a weighted dissimilarity index (Bray-Curtis) in an ordination analysis (Figure 4). Replicate samples collected at the same temperature clustered together and differed significantly from one temperature to another. Therefore, the different bacterial communities that develop at temperatures from 30°C up to 38°C appear able to increase the OCP. In addition, the two coupon series incubated at 30°C during a two-week interval exhibited high community similarity, showing that surface community assembly was highly reproducible.
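The ordination behind Figure 4 was computed with the R packages vegan and phyloseq, as described in the Bioinformatics Analysis subsection. As an illustrative stand-in (not the authors' code), the following Python sketch performs the same Bray-Curtis/NMDS steps with scipy and scikit-learn on a made-up OTU count table.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Placeholder OTU table: rows = coupon samples, columns = OTU counts.
rng = np.random.default_rng(0)
otu_counts = rng.integers(0, 200, size=(8, 50)).astype(float)

# Relative abundances, then pairwise Bray-Curtis dissimilarities.
rel = otu_counts / otu_counts.sum(axis=1, keepdims=True)
bray = squareform(pdist(rel, metric="braycurtis"))

# Non-metric multidimensional scaling on the precomputed dissimilarities.
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = nmds.fit_transform(bray)
print(coords.round(3))  # 2D coordinates to plot, e.g. coloured by incubation temperature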
We identified microorganisms that were distinctive of the "ennoblement" condition using a biomarker detection analysis with the LEfSe software (Segata et al., 2011). We chose to define the condition with ennoblement (30, 33, 36, 38°C) as opposed to a lack of OCP change (40°C). We retained biomarkers with a minimum linear discriminant analysis (LDA) score of 3, resulting in 47 OTUs that were differentially represented during ennoblement. Among these we found mainly Proteobacteria, including members of the Oceanospirillales, Rhodobacterales, and Alteromonadales (Figure 5). An OTU affiliated to the genus Oleiphilus was remarkably found almost exclusively in exposure setups between 30°C and 38°C, with respective mean relative abundances of 18.41, 2.67, 6.76, and 12.40%, against 0.03% at 40°C (Figure 5). However, other Oleiphilus OTUs were also present at 40°C. More strikingly, we also detected the presence of a recently described Proteobacteria, "Candidatus Tenderia electrophaga," as a very strong biomarker (Figure 5). Members of this candidate genus were found in high relative abundance from 30°C to 38°C, with mean relative abundances of 1.54, 1.58, 6.64, and 10.05%, and a peak abundance of 18.4% in a sample replicate at 38°C (Figure 5). In addition, this bacterium was found exclusively in ennoblement conditions, as no "Candidatus Tenderia electrophaga" OTUs were detected at 40°C. Finally, we examined the abundance of these biomarker bacteria in the pool of colonizing bacteria from natural seawater collected in the vicinity of our setup's pump intake, before and after the incubation periods. The bacterial composition in seawater was strikingly different from that on the steel surfaces (Figure 4). No sequences of the best ten biomarker OTUs were recovered from seawater, except for two affiliated to an Oleiphilus OTU.

DISCUSSION

The open-circuit potential is defined by the concentration of the reactants, the formal half-cell potential (E°′) and the kinetic parameters associated with each half-cell reaction. A change in the cathodic reaction has often been invoked as the only half-cell reaction changed by the presence of microorganisms on the surface of the stainless steel. Indeed, the bacterial community is known to increase cathodic reduction efficiency (Johnsen and Bardal, 1985; Holthe et al., 1989; Audouard et al., 1994; Zhang and Dexter, 1995; Mollica and Scotto, 1996; Rogne and Steinsmo, 1996; Le Bozec et al., 2001; Larché et al., 2011). In this study, we were interested in gaining further insight into the bacterial community of the stainless steel surface immersed in seawater, and its electrochemical activity in relation to the potential ennoblement. Previous studies have shown an inhibition of the ennoblement activity above a critical temperature (Scotto et al., 1986; Dupont et al., 1997; Martin et al., 2003; Gümpel et al., 2006; Thierry et al., 2015). In our setting, this critical temperature was between 38°C and 40°C, above which ennoblement was inhibited despite the continuing presence of bacteria. We used that information to investigate the community composition between ennoblement at lower temperature vs. no ennoblement at higher temperature. A central result of this work is the identification of OTUs affiliated to "Candidatus Tenderia electrophaga" that were exclusively present under conditions leading to potential ennoblement, i.e., under 40°C, and considerably enriched compared with natural seawater. Other electroactive bacteria have been shown to be able to change the potential of an electrode under anaerobic conditions (Mehanna et al., 2009), and a microbial community able to do the same under aerobic conditions was described as an electroactive biofilm community (Rimboud et al., 2017). This study is correlative and cannot formally establish a mechanistic link between the detected biomarker and ennoblement, but the distinctive presence of an electrotroph bacterium under aerobic conditions is, to our knowledge, a novel observation and suggests a possible metabolism for potential ennoblement.
"Candidatus Tenderia electrophaga" can indeed accept electrons from a conductive surface while using oxygen as a terminal electron acceptor (Eddie et al., 2016). This activity is based on its extracellular electron transport system composed of cytochrome c oxidase complexes coupled with the reduction of oxygen and the fixation of carbon dioxide using the Calvin-Benson-Bassham cycle, making it a chemo-electro-autotroph (Eddie et al., 2017). In that study, type IV pili genes were also proposed to play a role in the extracellular electron transport. In the original study set up that led to the description "Candidatus Tenderia electrophaga" on a biocathode, the biofilm development developed a current density between 0.92 and 4.28 µA/cm 2 at a fixed potential of +66 mV vs. SCE (Malanoski et al., 2018). Our experimental set up does not include biocathodes but rather open-circuit conditions, meaning that no current was provided nor drawn to the surface microbial communities. However, a possible source of electrons could be the passivation current produced by the stainless steel. This current is due to the slow oxidation of iron and chromium atoms in the passive layer of the stainless steel, forming a thin film containing iron and chromium hydroxides (Marcus, 2011). Given the polarization curves obtained at 36 and 40 • C we showed that our coupons' passivation current is on the order of magnitude of 0.01 µA/cm 2 . These values are of two orders of magnitude lower than those observed at microbial fuel cell biocathodes, but could potentially sustain the growth of electroactive bacteria under open-circuit conditions. Overall, our results suggest that ennoblement could be explained by the following mechanism: the stainless steel would act as an electron source for electrotrophic bacteria via its passivation current, using extracellular electron transport mechanism coupled to oxygen reduction. This hypothesis is based on results of a study using a metabarcoding approach that comes with its own limitations as the sequenced DNA represent a fragment of the 16S rRNA gene and not the complete genome. Also, the biomarker approach does not consider bacteria that could be present at all temperatures but with a different activity at 40 • C that would result in the absence of the potential ennoblement. These limitations would be overcome with the use of metagenomics, to assess the genetic potential of the bacterial communities, and metatranscriptomics to confirm if the actual genes expressed would support our hypothesis. Alternatively, a model for ennoblement based on local pH change at stainless steel surface has been proposed (Dexter and Chandrasekaran, 2000) and requires the formation of a thick biofilm acting as a strong diffusion barrier. Our SEM observations of bacterial colonization after ennoblement do not support this hypothesis, as after one week of exposure, the development of attached bacteria was at a very early stage and could be defined as sparse bacterial colonization rather than an actual biofilm. We did not observe a uniform threedimensional structure of extracellular polymeric substance with embedded bacteria covering the whole surface of the stainless steel. Another hypothesis invoked the contribution of hydrogen peroxide as a central electron acceptor during ennoblement (Scotto and Lai, 1998;Landoulsi et al., 2008b). 
However, H2O2 release by heterotrophic bacteria as a byproduct of oxygen respiration requires a high concentration of electron donor (20 mM of D-glucose in Landoulsi et al., 2008a). This does not correspond to our environmental conditions and is thus unlikely to be a central explanation for ennoblement in natural seawater environments.

Besides "Candidatus Tenderia electrophaga," other OTUs were identified as biomarkers, especially some bacteria able to use aliphatic hydrocarbons as energy and carbon sources, like Oleiphilus (Yakimov and Golyshin, 2014). These bacteria could originate from the seawater pipes, which might be contaminated with a small amount of oil-derived components. This would favor hydrocarbon metabolism and therefore the development of these bacteria. We identified the genus Marinobacter, which includes oil-degrading species, and which was also found in the cathodic enriched community where "Candidatus Tenderia electrophaga" was described (Eddie et al., 2017; Wang et al., 2015). The presence of oil-degrading bacteria in potential ennoblement conditions is intriguing, but their role has yet to be defined. Gammaproteobacteria are often reported in oxygen-reducing biocathode communities (Strycharz-Glaven et al., 2013; Rothballer et al., 2015; Milner et al., 2016). They were also found to be dominant in this study (Supplementary Figures 1, 2), and four of the top ten biomarkers are also affiliated to the Gammaproteobacteria (Oleiphilus, "Candidatus Tenderia electrophaga," Aliikangiella, Marinobacter). We found other biomarkers with poor taxonomic assignment, only to the order level, e.g., NRL2 (Alphaproteobacteria). Therefore, no hypotheses can be generated from the presence of these biomarkers. The risk associated with ennoblement is pitting corrosion, as the potential increase reaches values close to the pitting potential. The use of high-grade stainless steel and short exposure times limited the risk of pitting corrosion in this study, but future work could involve lower-grade stainless steel to associate the bacterial communities with pitting corrosion.

CONCLUSION

The rationale for this study was the observation of a sharp temperature inhibition, around 40°C, of stainless steel ennoblement in the temperate seawater of the Bay of Brest. We used this property to identify bacteria potentially involved in ennoblement. The detection of "Candidatus Tenderia electrophaga," a known electrotroph, as a biomarker for ennoblement was a remarkable result. Based on recent literature on "Candidatus Tenderia electrophaga" activity, we propose a new mechanism for ennoblement based on extracellular electron transfer with oxygen as a terminal electron acceptor. The electron donor for this reaction could be the passivation current resulting from slow stainless steel oxidation at the passive layer interface.

DATA AVAILABILITY STATEMENT

Sequences were deposited in the European Nucleotide Archive under the accession number PRJEB27599 (http://www.ebi.ac.uk/ena/data/view/PRJEB27599).

AUTHOR CONTRIBUTIONS

FT, NL, DT, and LM designed and conceived the experiments. FT performed the experiments and data analyses. HM did the sample sequencing. FT and LM drafted the manuscript. NL, DT, HM, and MJ contributed to data interpretation and assisted with writing of the manuscript.
Perception of inpatients following remission of a manic episode in bipolar I disorder on a group-based psychoeducation program: a qualitative study

Background
This is the first study of a group-based psychoeducation program for inpatients following remission of a manic episode in patients suffering from bipolar I disorder in a Chinese population. The aim was to explore the patients' perspectives on the program and their suggestions for improving the intervention in the future.

Methods
Semi-structured, in-depth interviews were conducted with 15 participants who had taken part in 8 sessions of a group psychoeducation program over 2 weeks. The verbatim transcripts of those interviews were analysed using thematic analysis.

Results
Five themes emerged from the data: the patients' perception of participating in the program, their perception of the setting, perception of participating in a group program, perception of the learning content, and perception of the outcome of participating in the program.

Conclusions
The results presented here describe how the short-term group psychoeducation program was experienced by the patients. Recommendations are also offered to improve the setting, content, and delivery. Our findings provide evidence that the program is beneficial for manic patients with bipolar I disorder, and this intervention warrants further research, especially in a Chinese population. If these benefits are confirmed in future studies, this program could be incorporated into routine psychiatric inpatient care in China.

Background
Bipolar disorder type I (BD-I) is the sixth leading cause of disability globally [1]. Manic episodes are a common feature of BD-I, and the lifetime prevalence is about 1.5% in European countries [2]. The lifetime prevalence of BD-I in China was about 0.09% as of 2017 [3]. Acute manic episodes usually require emergency admission to a psychiatric hospital to facilitate rapid recovery [4]. A range of quantitative research indicates that group-based psychoeducation interventions guide patients to greater awareness of relapse features [5], reduce time spent in bipolar episodes [6], raise treatment adherence levels and improve quality of life [7]. Poole, Smith and Simpson [8] conducted a qualitative study on the perspectives of bipolar outpatients enrolled in a group-based psychoeducation program in the UK, and they found that psychoeducation had a positive impact on patients' medication adherence, social support, knowledge and acceptance of BD, and access to services. However, few psychoeducational studies have been conducted in psychiatric inpatient settings; inpatients are more severely impaired than outpatients [9], and most patients suffering from manic episodes require hospital admission. It is therefore unclear from the literature whether psychoeducational programs can benefit routine psychiatric care for severely manic patients. Dropout rates are very high in outpatient settings, perhaps due to the long duration of psychoeducation programs [10]; hence, the present study shortened the treatment to examine whether a short-duration psychoeducational program could reduce dropout rates whilst delivering similar benefits. Furthermore, research indicates a need to examine psychoeducation for BD as a tool to engage people from diverse ethnic backgrounds [8]. Chinese populations hold particularly stigmatizing attitudes towards mental illness because of Confucian beliefs [11,12].
The diagnosis and treatment of mental disorder leads to marginalization due to perceived humiliation within the community [13]. Thus, many discharged patients no longer feel the need to see a doctor regularly and, as a consequence, stop taking their medication. Psychoeducation may be an important way to address stigma and other barriers to mental health treatment amongst these inpatients [14]. There is also evidence from a systematic review suggesting that mental health interventions targeted to a specific cultural group are four times more effective than interventions that do not address such cultural differences. The same is true for interventions conducted in the patients' native language (if other than English), these being twice as effective as those using English [15]. It is unclear whether culturally adapted group-based psychoeducational programs are acceptable and feasible for Chinese patients. As the present study is the first to conduct a group-based psychoeducation program for BD-I inpatients in China, a qualitative evaluation is the most appropriate way to understand the unique cultural elements of engaging with the intervention [16]. Therefore, we aimed to explore the experiences of Chinese patients with bipolar disorder in a culturally adapted short group-based psychoeducation program and their perceptions of elements such as content, teaching, delivery methods and impact, as well as their suggestions for improvement of the program in China.

Group-based intervention
The program was delivered by the research team of Beijing Anding Hospital, affiliated with Capital Medical University. All BD-I patients admitted to the inpatient unit who received standard hospitalized treatment between September 2015 and April 2016 were approached for participation. Participants were referred to the program when their clinical symptoms were alleviated [as defined by a Young Mania Rating Scale (YMRS) score of less than 8] and during their discharge period. The program delivered eight sessions of 40-60 min duration over two weeks. Three psychiatrists and two clinical psychologists directed these sessions, and the facilitators were trained by an academic expert in psychoeducation. The psychoeducation handbook was culturally adapted from Colom and Vieta [17], and the group sessions included the following topics. Session 1: introduction to BD disease knowledge, such as biological etiology, epidemiology and concepts; Session 2: definition of mania and hypomania, depression, mixed states and psychotic symptoms; Session 3: biological rhythm and manic/depressive episodes; Session 4: the role of pharmacological treatment and different types of medication; Session 5: medication adherence and monitoring, electroconvulsive therapy and psychotherapy; Session 6: stress management, problem-coping strategies and interpersonal relationships; Session 7: recurrence signals, early detection of episodes and how to seek help; Session 8: review and assessment, how to establish a management plan and how to monitor daily mood. The psychoeducation materials in Chinese are available on request by email from the corresponding author.

Sample
Purposive sampling was used to select 13 participants who successfully completed the program and 2 participants who did not complete the program. Participants who had been discharged from the hospital were invited to take part in the interview process through a letter/email, which was sent along with the informed consent form.
If the participants were willing to take part in the study, they had two weeks to contact the author by telephone and arrange a suitable time for the interview in the hospital. Resident participants were recruited from the inpatient ward directly after they completed the program. Data saturation was achieved at the 12th participant. Three more interviews were conducted to confirm saturation [18], giving a total sample of 15 participants (Table 1). Each individual interview lasted 40 to 60 min and was conducted by the authors. All interviews were audio-recorded.

Data collection
This study aimed to understand the underlying themes in patients' perspectives of the group-based psychoeducation program and any impact of the intervention. Hence, the researchers used semi-structured interviews to obtain the data and used thematic analysis, as this was the most appropriate method for qualitative analysis [19]. All interviews were conducted at the affective disorder inpatient unit. The interview questions were developed through literature review and discussion with qualitative experts. The expert panel included two experts in qualitative research and psychoeducation. The interview questions were revised after their review. Also, in order to compare the perceptions of Chinese patients with those of European patients, some of the questions were adapted from Poole, Smith and Simpson [8]. Some sample questions discussed in the interviews were as follows: How did you feel while participating in this program (initial question)? Would you recommend the program to others? How do you feel about the learning environment? How do you feel about the facilitators? What do you think of the learning content? How can this content be improved further? What is your experience of the group? What has been the benefit(s) of participating in the program?

Analysis
The transcripts were analyzed according to the thematic analysis method described by Braun and Clarke [19]. There are six phases of thematic analysis: familiarizing oneself with the data; generating the initial codes; searching for themes in the text; reviewing themes; defining and naming the themes; and producing the report. To ensure confirmability, two authors independently reviewed the coding and themes, one with a background in clinical psychology and the other with a background in psychiatry [19]. Related words or sentences representing an underlying idea were identified and coded into a category for the analysis of emerging sub-themes [20]. These two authors also held several meetings to compare their notes on themes and sub-themes in order to reach consensus on an accurate interpretation of the data. The credibility of the data was established through a check by another staff member.

Ethics
This study was approved by the Human Research and Ethics Committee of Beijing Anding Hospital, Capital Medical University. Informed consent, including permission to record the interviews, was obtained from the participants. The participants were also informed that they could withdraw from the study at any time without reason or penalty.

Results
Five themes emerged from the thematic analysis: (1) perception of participating in the program; (2) perception of the setting; (3) perception of being in a group; (4) perception of the learning content; and (5) perception of the outcome of participating in the program.
Theme 1: Perception of participating in the program

Sub-theme 1: Engagement
For those participants who were willing to complete the program, the most common reason for doing so was their wish to understand more about their illness. "I have a better understanding of the illness, why not? It's cost-free" (Participant 10).

Sub-theme 2: Recommending the program to others
All participants would recommend the program to others. They commented that people would experience a range of benefits if they participated in this program. "I feel so lucky that I was selected to attend this program, I would definitely recommend the program to others I know suffering from a bipolar disorder." (Participant 6)

Sub-theme 3: Suggestions for the program
Some participants suggested that the research team should have made a comprehensive booklet for patients who wanted to learn further in their free time. "Can you print the handout for me before I am discharged from the hospital? I need it. I can review it when I am at home." (Participant 6) Furthermore, participants expressed interest in any follow-up studies, where they hoped to learn suggestions for dealing with their illness.

Theme 2: Perception of the setting

Sub-theme 1: Environment
Most participants reported that the classroom was relaxed, and that the light background music created a pleasant environment. "This ambiance with light music enhanced my mood and allowed me to focus on the class." (Participant 7) "A big table can fit all of the people (group members) and I felt it's a teamwork and our heart tights junctions." (Participant 10) Some participants suggested ways in which the setting could be improved, such as the need to create a quieter environment to avoid potential distractions and the need to improve lighting conditions to improve the readability of the PowerPoint slides. "um….the light was too dim, I felt sleepy and a bit depressed....it was hard to concentrate on the class. What I suggested is a change to a brighter light." (Participant 2) "You know, I would wonder sometimes if I was hearing the noise from outside. I hope it's quiet." (Participant 3) Also, some participants suggested that they required a short break, perhaps with a light refreshment, in order to increase their attention and participation. "The session was long, and we should have had five minutes to rest in between." (Participant 1)

Sub-theme 2: Facilitators
Most of the participants reported that the facilitators performed at an expert level and their speech was clear and well structured. Participants observed that the facilitators were friendly, responsible, and were attentive and respectful listeners. "The class was very organized; the devices, such as the laptop, projector and practice booklet, were well arranged by those doctors and every time they came to pick us up from the ward to the classroom, they were always on time; I appreciated that." (Participant 8) However, participants also offered a range of recommendations. The facilitator's slides should have included more interesting, flashy information and photos. Indeed, text-focused presentations may create the impression that the presentation is 'dry' or overly technical and boring. "There are too many texts in some slides, I found the class quite boring." (Participant 9) Furthermore, the facilitator should have allowed more time to answer questions from the class. "I had many questions on the medication section; I wanted to ask more questions to the teacher but the time was ending."
(Participant 9) Also, it was suggested that the facilitators should improve their class-management skills, as participants reported discouragement or distress due to others' negative or irrelevant comments. "Sometimes the class was in a little bit of chaos and I felt irritable towards the group members who were speaking loudly and dominating the group." (Participant 1) Finally, it was suggested that facilitators could start each lesson with a brief review of prior content, as this could have enhanced understanding and retention. "However, some contents are easy to forget, I wish the facilitator had repeated things again at the beginning of the class." (Participant 8)

Sub-theme 3: Time
Participants preferred to attend the class in the afternoon rather than in the morning, due to sleepiness. "You know, we have to wake up at 6 am during the routine hospitalization. In the following 3 hours, I have to eat breakfast, read some books and play table tennis. Whenever the class started at 10 am, I felt tired and sleepy. So, I prefer to attend the 3pm class in the afternoon as I felt more energized then after my 2pm nap." (Participant 5) Most participants agreed that each session should take about 40 min. "Sometimes the class took longer than 40 minutes and I felt restless." (Participant 6) Sessions were held at intervals of 2 to 3 days, because if the gap between sessions is too long, some participants may lose interest and patience with the program.

Theme 3: Perception of being in a group

Sub-theme 1: Group- versus individual-based intervention
Most of the participants commented that group-based psychoeducation was preferable to an individual psychoeducation intervention. They reported that group-based programs can help participants realise that they are not alone in suffering from BD. "I am not only the one who has bipolar disorder; many people around me have the same problems. Within the group, we can discuss bipolar disorder." (Participant 8) They reported the benefit of exposure to a wider range of perspectives on their situation when hearing from other members, such as how to communicate with friends and relatives regarding their illness. Group therapy had a range of other benefits, with participants being more likely to discuss and share experiences of: medication use and side effects; treatment in the outpatient and inpatient units; experiences with family members and friends; and the varying coping strategies that they found useful. However, a few participants suggested that individual therapy sessions would have been better because they experienced difficulties talking in a group of 8-10 people. Some participants felt embarrassed after sharing their own experiences of the illness with others, because someone in the group was laughing. "I was sharing my own experiences at the period of manic onset; I told everyone I earned salary of 10000 RMB in a month, my husband only earned 8000 RMB. I wanted to change my husband at that time….but then everyone in the group was laughing at me and I felt very embarrassed." (Participant 13) Two participants reported that they felt uncomfortable at first when discussing each other's problems in front of strangers. "I didn't know what to expect from the course and I felt very anxious initially…many strangers were sitting around me." (Participant 3)

Sub-theme 2: Class-taught versus discussion
Most of the participants preferred to be involved in facilitator-directed class discussion rather than a teacher-directed class.
Their reasons included difficulty maintaining attention when all they could hear was the teacher talking. An effective way to improve interest may be to require participants to think about and connect with the content. "If I only listened to the teaching talking, I would lose my interest in the class." (Participant 5) A few participants reported that they preferred teacher-directed classes, because some participants in the group were likely to dominate the group discussion. "She kept talking, talking and talking in the class, I felt restless and annoyed at her."

Sub-theme 3: Family involvement
Altogether, 80% of the participants mentioned that they would like to ask their family members to become involved in the program. Listening to the class would help some family members understand that they should not put too much pressure on BD sufferers, because some of the symptoms are due to biological or environmental effects, rather than "stubbornness". "If my mom could come to the class that would be helpful, she could listen more about bipolar disorder, the medication I am taking and this would help her understand more about me….maybe she will change her attitude towards me, that I am not lazy." (Participant 4) Some participants suggested that it was only necessary for their family to participate in some sessions, because some of the sessions are not relevant to them. "Yes, especially in the psychological session, the doctor taught us how to identify, understand, express and manage our emotion; if my husband was here, it would definitely improve our relationship." (Participant 13) However, some participants did not show this interest; one participant explained that their family members lived far from the hospital and would have trouble with transportation and taking time off work. "That's too far, my dad is living in the Shandong province, I don't want him to come here everyday, you know traveling is difficult for him". (Participant 14) Two other participants reported that they had poor relationships with their mom/husband and would prefer not having them within the group.

Sub-theme 4: Real stories sharing
Some participants suggested that the course could include one or more patients with bipolar disorder who had been cured or who are effectively managing their illness. Hearing from such individuals could inspire hope and confidence that recovery is possible for others. "It's better to listen to some real stories from real people who have been cured, such as how they recovered from the bipolar disorder, what they did during the treatment and recovery period… which medication they think is the best….this information could help me feel more authentic". (Participant 15)

Theme 4: Perception of learning content
Most of the participants commented that the content was professional, well structured and useful. The following results are specific to each session.

Sub-theme 1: Concept of BD
Participants said that the session on "bipolar disorder" increased their knowledge of BD, such as the symptoms of depression, manic behavior, and etiology. "No one has ever told me what is bipolar disorder, and now I know what has happened to me." (Participant 10)

Sub-theme 2: Treatment of BD
Most of the participants expressed that they were willing to take different medications for bipolar disorder, and that they understood how important the medications were as an adjunct to other psychotherapeutic interventions.
"I don't like to take medication, I always hide the medication under my pillow and then the nurse found out and forced me to take it. But now I understand why I should take the medication." (Participant 13) Sub-theme 3: Psychological approach Some patients participated in psychological exercises, such as imagery relaxation and group-based mindfulness practice. These were experienced as being useful and patients kept practicing them in their daily life. This session also included some basic cognitive therapy. "The imagery relaxation made me feel comfortable; I could imagine that I was in the ocean with my lover. I also practiced it before going to sleep every night and it enabled me to really calm down." (Participant 9) Some participants expressed that they had learned how to replace some automatic negative thoughts and beliefs with new rational beliefs, which lead to their behavior becoming more reasonable. Particularly, two participants mentioned that they would continue to participate in cognitive behavioral therapies after their discharge from hospital. "In the past, I was always thinking that I was going to die, but I understand now this type of thought is called catastrophizing; catastrophizing means only expecting the worst outcome in everything, so actually I was not going to die." (Participant 7) Sub-theme 4: Self-management Some participants reported that they found the selfmanagement session useful, such as learning how to list triggers and monitor mood. Participants' emotions have become more stable as a result of the self-management session; they know how to recognise, understand, express and self-manage their emotions. "I feel that I can actually recognize the dangerous triggers in the different environment settings as a result of an increased awareness of my mood." (Participant 5) Sub-theme 5: Lifestyle management All participants reported that maintaining a regular bio-clock rhythm, being physically active, and abstaining from alcohol, reinforce the importance of a healthy diet and lifestyle. Sub-theme 6: Relapse prevention Some participants reported that the mood diary was the most helpful tool. They reported that monitoring their mood changes on a day-to-day basis would assist in making the most of their visit to see the doctors. Sub-theme 7: Suggestion for improving the learning content Some participants suggested ways in which the content could be improved. Participants complained that they still had fears of modified electroconvulsive therapy (MECT), and therefore they thought the program should include more details on MECT, such as its mechanism of operation, side effects, and information on anesthesia. "The doctor gave me the sheet but I totally had no idea how to do that."(Participant 13) Participants also suggested wanting to understand how to reduce or manage the side effects of their medication. "The side effects of the medication are common to everyone, not only me, but I still fear weight gain as result of taking long-term medication, I want to know how to reduce this side effect." (Participant 8) "I understand how important medication is to me, but the side effects….You know I am a girl and I am not married yet." (Participant 7) Some female participants suggested that the content should include information about bipolar disorder and pregnancy. "I have bipolar disorder and want to get pregnant but I want to know the risks and benefits of medications and forms of birth control." 
(Participant 13)

Theme 5: Outcome of participating in the program

Sub-theme 1: Self-acceptance
The majority of participants reported that the program reduced their feelings of embarrassment, shame and confusion regarding their illness. Hence, the program increased their acceptance of their illness. "I understand that my illness is not my fault." (Participant 11) "This is something I can't change by myself, but I should accept it." (Participant 15)

Sub-theme 2: Self-confidence
Some participants mentioned that they were struggling to speak up for themselves within their daily lives. However, whilst practicing skills introduced through group work, patients increased their confidence in interacting with people outside the group. "I was struggling to speak up about my illness, but when someone was speaking up about their illness in the group discussions, that would give me the confidence to do the same." (Participant 8) "Since I started to share my own experiences in the group, I felt stronger within myself." (Participant 6)

Sub-theme 3: Self-awareness
Some participants reported that they had become more aware of the warning signs of a bipolar episode, and this helped them address their mood when it was becoming excessively high or low. "Once I recognized the trigger and in the future I will know when my mood is becoming low or high." (Participant 15)

Sub-theme 4: Self-motivation
Some participants stated that they experienced increased motivation to participate in treatment and engage in lifestyle changes, and these feelings led to a sense of control. "My mom sent me to the hospital because I would always go on shopping sprees... If I receive enough treatment, I won't waste the money again." (Participant 4) "I feel so sorry and regret towards my husband, I won't try to change my husband again despite my salary being higher than him, I will try my best to recover from the illness." (Participant 13)

Sub-theme 5: Social support
Some participants reported that they developed new friendships with group members and that they would continue to meet each other and offer social support after the end of the program. "I am not only getting support from my family, but also from the group members." (Participant 8) "She said I am her good friend, we did everything together during the hospitalization, we would wake up at the same time, go to the washroom, watch TV, sit together when eating breakfast and lunch." (Participant 6)

Sub-theme 6: Relationship with the doctor
Some participants mentioned that the program enhanced their relationship with their doctor, and they also reported increased trust in their resident doctor at the end of the program. "I was reluctant to talk to my resident doctor before attending this program, because I thought he had lied to me about my condition and that he just wouldn't let me go home but I found out I was wrong….we did have a pleasant conversation yesterday in the ward and I believed him." (Participant 12) Some participants said this change in trust may result in greater adherence to outpatient visits and check-up sessions. "If I keep up taking the medication that my doctor prescribes me then everything will be okay for me." (Participant 5)

Discussion

Perception of participating in the program
Patients believed that a better understanding of their illness would encourage positive treatment outcomes, and this formed the main motivation to complete the program. The participants showed interest in recommending the program to others.
These findings are in agreement with an earlier qualitative study in an outpatient sample [8]. The patients recommended that the program should provide a comprehensive educational booklet. This may be because some patients favour written materials, as they can then review them at any time [21]. Finally, participants expressed interest in further participating in follow-up studies after discharge from the hospital.

Perception of the setting
Most of the participants reported that the learning environment facilitated their attention and attendance, with the presence of light music enhancing their mood and focus while in class. The single-table seating led to feelings of equality and solidarity. Some participants suggested that the learning environment could be improved, with suggestions including enhancing the ambient light, introducing breaks and refreshments, and ensuring a quiet and undisturbed environment. Participants reported preferring afternoon sessions, perhaps after a nap, due to morning sleepiness and the disruption to preferred daily routines. Most participants agreed that each session should take about 40 min to achieve a balance between the content covered and the toll on attentional and emotional resources. All participants in our sample were in a recovery period from acute BD symptoms and could therefore experience disruptions to mood during longer sessions. Participants in the present study agreed that sessions should be separated by intervals of 2 to 3 days, as they may lose interest and patience with the program if the gap between sessions is excessive. Our program delivered eight sessions in a period of two weeks, which might explain why the dropout rate was low in our study. Most of the facilitators were perceived as professional, friendly, respectful and responsible. There were suggestions for the facilitators to: (i) improve the PowerPoint slides to include more interesting, flashy information, photos and videos, in order to enhance the class's learning atmosphere; (ii) expand even further upon the main points offered in the lecture slides; (iii) allow more time to answer participants' questions; (iv) improve class-management skills to prevent dominant group members offering disruptive or irrelevant content. Indeed, in the study by Poole, Smith and Simpson [8], group members dominating discussion was found to be one of the reasons for dropping out of the program. Lastly, participants suggested that the facilitators (v) should offer a revision of the previous session at the beginning of each new session, as participants reported difficulties remembering all the information.

Perception of the learning content
The content and the lecturers were both judged to be of expert quality, clear and well structured. Participants especially enjoyed the sessions that covered: the definition of bipolar disorder; the treatment approach; the psychosocial approach; self-management; and a healthy lifestyle to prevent relapse. However, the lack of detailed knowledge about MECT caused some participants to feel anxious. Hence, participants suggested that the MECT session should cover its mechanism of operation, side effects, anxiety reduction and some basic knowledge of anesthesia. Female participants requested more information about bipolar disorder and pregnancy and about how to reduce the side effects of medication.
Female patients appeared more prone to worry about weight gain as a result of taking medication. Excessive weight gain was the most common cause of non-adherence to treatment [22]. Crucially, further instruction and practice on mood charting is necessary, as most of the participants reported that they did not know how to complete the exercise sheet at the end of the program.

Perception of participating in a group
Most of the participants stated that group-based psychoeducation was preferred over individual intervention. This result supports earlier findings, as learning that the patient is not alone in suffering the illness can alleviate feelings of shame and isolation [23]. Participants were able to learn how to communicate with their friends and relatives from interacting with the group. In Chinese society, the role of "teacher" has long been respected. Chinese people consider the teacher an expert whom they can trust and rely on for valuable suggestions [24]. Interestingly, there are psychotherapy studies, such as those involving cognitive behavior therapy, suggesting that didactic teaching is the desirable approach during the treatment process, since it can improve the trust relationship between patients and psychotherapists [24][25][26]. Surprisingly, in our study, most of the participants preferred to be involved in a facilitator-directed class discussion rather than didactic teaching. Some participants felt embarrassed after sharing their own experiences, as some group members were laughing at them. In order to partially address these concerns, the facilitator should perhaps remind participants of the principle of information confidentiality. Also, the facilitator should emphasize the key points from the slides by asking questions of participants, thereby refocusing any discussion that approaches an irrelevant topic. For those participants who feel uncomfortable during initial sessions, the facilitator should provide a more natural icebreaker exercise [8]. Many participants suggested that involving family members would be useful, as this would promote an understanding of the patient's illness and their current condition. However, whether relatives should become involved in the program depends on the patient's personal and financial situation. Poole, Smith and Simpson [8] suggested that future programs could provide sessions specifically designed for family members to become involved in the therapeutic process. Lastly, some participants suggested that the program should invite patients who are effectively managing or have been cured of BD. Sharing their treatment and recovery experiences with the group may encourage hope and confidence within group members. This point was also raised in a previous study [8].

Outcome of participating in the program
Poole, Smith and Simpson [8] demonstrated that a UK-based group psychoeducation intervention had a positive impact on patients' knowledge, acceptance, social support and attitude towards taking medication. Our results are comparable to these findings from European patients, as participants reported that the program increased their personal acceptance of their illness, reducing their feelings of confusion and self-doubt. Patients became more confident in communicating with people outside the group, which can reduce self-stigma and promote prosocial behaviours. A number of participants stated that they had a strong motivation to stay in treatment and experience lasting changes as a result.
Some participants reported that they would continue to meet each other after the end of the program, thereby increasing their social support. Some participants increased their self-awareness as they became more capable of recognising personal triggers. Others reported that the program enhanced their "trust relationship" with their doctor. A more trusting relationship could increase compliance and help achieve better treatment outcomes [27].

Strengths and limitations
The aim of this study was to qualitatively explore experiences of group-based psychoeducation for BD-I via thematic analysis. The data analysed during the process evaluation helped the authors understand the strengths and weaknesses of the program. This addresses a gap within the existing literature regarding the efficacy of group psychoeducation for individuals with BD. This study is the first to assess a group-based psychoeducation program for inpatients following remission of a manic episode in bipolar I disorder within a Chinese sample. The findings indicate that the program was helpful for patients with BD-I. Patients offered a range of recommendations that future research on group-based psychoeducation programs should explore. There are some limitations to this study: the methodological design did not include direct questions about participants' potential discomfort and non-beneficial aspects of the program; our sample had a large proportion of female patients, hence future studies may wish to explore differences in group dynamics across a range of gender ratios; family was not involved in this study, so future research needs to explore family-based interventions for BD in Chinese populations; finally, since this study was conducted on a single hospital ward, future research may consider implementing this intervention across multiple sites and including community-based psychoeducation programs to explore its generalisability.

Clinical implications
Future programs should include further details about MECT and more information about BD and pregnancy for female patients. More support from facilitators is required when patients work on mood charting. Importantly, participants may benefit from less didactic teaching and more discussion during each session. Finally, it would be helpful for patients to have access to a comprehensive education booklet, which they can then review at any time, at their own pace. As recommended by many participants, future psychoeducation programs should involve family members, as this could help them better understand the patient's illness and current condition. Future programs may also consider inviting BD patients who have been successfully treated, as sharing their treatment and recovery experiences with the group may encourage hope and confidence. Our study demonstrated that a culturally adapted group-based psychoeducation program for BD is feasible and acceptable in a Chinese population. Different social and cultural backgrounds can affect patients' perception of their symptoms and their engagement with services [28]. The role of Confucianism in Chinese culture, as well as its collectivistic tradition, discourages open displays of emotion in order to maintain social and familial harmony and to avoid exposing personal weaknesses.
As a result, some Chinese patients are reluctant to share negative feelings for fear of creating interpersonal conflicts (and hence damaging their connections with others) and bringing shame to their family ("losing face"), which could result in the avoidance of interactions in group interventions. For instance, Lin [25] indicated that Chinese American patients may experience difficulties with straightforward discussions about some problems between family members during cognitive behavior therapy. Thus, when working with Chinese patients, the notion of "saving face" is an important element that needs to be considered throughout the course of the intervention [24]. In our study, we found that most of the participants were willing to discuss and share personal experiences with others in the group. Facilitators can therefore educate patients to recognize the importance of actively engaging in group discussions at the beginning of and during the course of the sessions. Also, culturally adapted interventions, such as the use of local folk stories, idioms, images and religious examples, are an effective way to facilitate group discussions and promote a good trusting relationship [28].

Conclusion
This qualitative study provides positive evidence on the benefits of a group-based psychoeducation program in enhancing patients' knowledge, confidence, acceptance, motivation, social support and trusting relationship with the doctor. Based on the benefits of our program for patients, this intervention could potentially be incorporated into routine psychiatric inpatient care. It is therefore suggested that policy makers should consider placing more effort into the provision of more accessible psychoeducation interventions for patients with bipolar disorder in China.

Abbreviations
BD: Bipolar disorder; BD-I: Bipolar disorder type I
Padwa Howard, Social Poison: The Culture and Politics of Opiate Control in Britain and France, 1821–1926 (Johns Hopkins University Press, 2012), pp. 248, $55.00, hardcover, ISBN: 9781421404202.

terms, as a series of works of short fiction, rather than as windows into a universe existing beyond the stories. The Scientific Sherlock Holmes is unlikely to be of great interest to readers of this journal. It does not engage with current scholarly debates about Sherlock Holmes, science and wider Victorian culture. It is based almost entirely on works of secondary literature, many of which are not scholarly. While it does not claim to be a history book, it does contain some passages of historical exposition. However, a number of these appear to be based on quite perfunctory research. For example, in a passage discussing the rate of uptake of fingerprinting by police, O'Brien cites the failure to analyse a bloody handprint found at the scene of the murder of Marion Gilchrist in 1909 (which led to the notorious conviction of Oscar Slater) as evidence that 'as late as 1909, Scotland Yard was not totally using fingerprinting' (p. 52). However, because the Gilchrist murder took place in Glasgow, it was not investigated by Scotland Yard (a colloquial term for the London Metropolitan police). The case cannot therefore be used justifiably as evidence for their use (or not) of fingerprinting. The book also contains some errors in its scholarly apparatus. At least two citations from the main body of the text do not appear in the bibliography. Excessive concentration on happenings or references that are not fully explained in the stories means that there is less space which can be devoted to addressing the interesting questions arising in the reader's mind, which is a shame. I would have liked to learn more about what it was about Victorian and Edwardian culture which so valued a detective with scientific and deductive credentials, rather than be subjected to a two-page demonstration of Holmes's aptitude in mental arithmetic. Why did so few of the stories feature the detailed examination of the corpse, despite Conan Doyle's medical background? Did the changing portrayal of the police in the stories reflect wider cultural shifts? There is plenty of scope for a thoughtful, engaging work on science and Sherlock Holmes. Unfortunately, The Scientific Sherlock Holmes disappoints in this regard.

These days, the smart money -actually, pretty much all the money -is on addiction being a chronic, relapsing brain disease whose cure will involve repairing or mitigating organic lesions. But many of the most important aspects of addiction are forged not neurochemically but socially and culturally: the line between addiction and other forms of chronic behaviour; how much addicts should be held responsible for their actions; the impact of 'structural' factors (e.g., racial segregation) on the epidemiology of addiction; and, not least, how authorities should understand and respond to the social problems associated with addiction. Such judgements have profoundly marked experiences with unproblematically biomedical illnesses such as tuberculosis, cancer and the flu, and the same remains true for addiction, too, no matter how the science plays out. This is why we need books like Howard Padwa's Social Poison: The Culture and Politics of Opiate Control in Britain and France, 1821-1926, which analyses the cultural processes that produced two dramatically different responses to the same biological phenomenon -which, in turn, meant dramatically different experiences for addicts. Padwa argues that writers, politicians and other commentators were as significant as medical researchers in establishing beliefs about opiates and addiction. The English writer Thomas De Quincey, for example, was the most powerful articulator of prevailing beliefs in the nineteenth century, helping cement opium's reputation as an 'emblem and pastime for outcasts and recluses, individuals who were, paradoxically, united by their radical individualism and passion for personalized reverie' (p. 48). This reputation was more of a problem in France than in Britain, where individualism and productivity were understood as hallmarks of good citizenship. Strongly coloured by their national interest in continuing the opium trade between colonial India and China, British cultural and imperial authorities ultimately concluded that opium was 'not necessarily incompatible with an industrious and self-sufficient lifestyle' (p. 66). The French, however, placed a much higher importance on 'the collectivity of the nation as a more active player in the cultivation of individual liberty and prosperity' (p. 70).
Novelists helped draw cultural links between addiction among French soldiers and treason, links which were then given greater currency by political discourse around actual episodes such as the Ullmo Affair. As a result, France ended up with the more incendiary panic over addiction, despite apparently higher rates of opiate use in Britain. These national understandings of addiction shaped drug policy as it emerged in France and Britain in the years around World War I. British authorities confronted addiction only as an economic issue: lax laws made their nation a haven for smugglers, who created problems for legitimate trade interests. As a result, 'repressing the international drug traffic, more than limiting domestic drug use', was their principal aim (p. 109). France, on the other hand, cracked down on addiction itself, criminalising not only the sale and use of opium but also 'encourag[ing] the possession or illegal use of opium' (p. 115). Ultimately, Padwa argues, British authorities saw opiate control as 'largely a means to an end' whereas for French authorities 'drug control was an end in itself' (p. 138). Despite these differences, the 1920s saw a similar situation in both nations: plenty of addicts with no easy access to drugs, but also no realistic hope of 'cure'. In Britain, addicts and physicians successfully marshalled cultural arguments about individual liberty and productivity to defeat anti-drug crusaders and secure the famed 'British System' of tightly controlled medical maintenance. French doctors made the same pleas on behalf of addicts but they fell on deaf ears. Much later, both nations would adjust their regimes to circumstance: tightening controls in the countercultural 1960s, and then expanding maintenance and public health approaches (a much more dramatic shift in France) in the age of HIV/AIDS. But throughout, national self-perception powerfully shaped each stage of drug control. That said, this is a valuable story: deeply researched, consistently insightful in analysis and crisply written. The comparative model is an excellent way to show the fundamental importance of social and cultural factors in shaping the history of addiction, and Padwa uses it deftly. Today's addiction researchers would do well to read this book and think about the cultural (and moral) assumptions that frame their own work, and that will most assuredly frame any policies that rely on their conclusions. The representation of national cultures can be strangely static in this otherwise compelling story. They appear like fields naturally generated within a nation state, homogeneous enough that almost any discourse produced within it -poetry, medical journals, novels, etc. -provides evidence of the same underlying structure. But culture is better understood as a verb rather than a noun, with discourse serving not as evidence of what is generally believed, but as competing and dynamic efforts to create general beliefs. In light of the recent 'transnational turn', this is especially important to recognise in nationalist discourses, which should be understood not as pre-existing characteristics but as contested cultural projects. This suggests a slightly different question: why did drugs play such a prominent role in the project of establishing national identities in Britain and France? This is obviously not the question Padwa set out to answer, and while cultural historians might wish he had, its absence in no way detracts from the many valuable contributions of this excellent book.

David Herzberg

Stress is a paradoxical business.
As the popularity of self-help books promising to help us 'manage' and 'master' stress attests, stress is widely regarded as the cause of a host of social and medical ills. Yet at the same time as we worry about stress, we also thrive on it -hence the popularity of competitive games and the compensating demand for books with titles like 'The Joy of Stress'. How we came to be living in 'the age of stress', and what we actually mean when we talk about being 'stressed-out', are the central themes of Mark Jackson's new book. Although Jackson begins by acknowledging the central role of stress in his own life (alongside the usual difficulties of writing a book, Jackson also had to contend with his study being flooded in the middle of his sabbatical), this is definitely not a self-help manual. On the contrary, Jackson's aim is to trace the history of scientific concepts of stress and what he refers to in his subtitle as 'the search for stability'. Both a medical condition and a cultural metaphor, stress -like depression and anxiety -appears to be ubiquitous in modern society, yet, Jackson writes, we 'know little about its historical trajectory' (p. 2). As a term, stress is also somewhat elusive, encompassing both environmental and socio-economic conditions and psychological and emotional concepts. Jackson argues that the modern obsession with stress can be traced to around 1983, when Time magazine ran an article entitled 'Stress: can we cope'. Then, as now, stress was regarded as an endemic disease of post-industrial capitalist societies, a condition that owed its prevalence to the 'chronic strains of life'. But as medical historians well know, there is nothing new about this supposed link between stress and the pressures of industrial lifestyles: in the 1880s Victorian medical commentators also sought to blame epidemics of fatigue and nervous conditions like neurasthenia on 'overwork' and 'overstrain'. The difference, Jackson suggests, is that in the 1880s stress was a vague, mechanistic concept -'an algebraic product' (p. 53) of heredity and external forces acting on the nervous system. It was not until the 1930s that stress became an object of medical research in its own right and began to be seen as a pathophysiological process that could be elucidated with the tools of modern science.
Recent insights from non-mammalian models of brain injuries: an emerging literature

Traumatic brain injury (TBI) is a major global health concern and is increasingly recognized as a risk factor for neurodegenerative diseases including Alzheimer's disease (AD) and chronic traumatic encephalopathy (CTE). Repetitive TBIs (rTBIs), commonly observed in contact sports, military service, and intimate partner violence (IPV), pose a significant risk for long-term sequelae. To study the long-term consequences of TBI and rTBI, researchers have typically used mammalian models to recapitulate brain injury and neurodegenerative phenotypes. However, these models have several limitations, including: (1) lengthy observation periods, (2) high cost, (3) difficult genetic manipulations, and (4) ethical concerns regarding prolonged and repeated injury of a large number of mammals. Aquatic vertebrate model organisms, including sea lampreys (Petromyzon marinus) and zebrafish (Danio rerio), and invertebrates, including Caenorhabditis elegans (C. elegans) and Drosophila melanogaster (Drosophila), are emerging as valuable tools for investigating the mechanisms of rTBI and tauopathy. These non-mammalian models offer unique advantages, including genetic tractability, simpler nervous systems, cost-effectiveness, rapid discovery-based approaches, and high-throughput screens for therapeutics, which facilitate the study of rTBI-induced neurodegeneration and tau-related pathology. Here, we explore the use of invertebrate and aquatic vertebrate models to study TBI and neurodegeneration. Drosophila, in particular, provides an opportunity to explore the longitudinal effects of mild rTBI and its impact on endogenous tau, thereby offering valuable insights into the complex interplay between rTBI, tauopathy, and neurodegeneration. These models provide a platform for mechanistic studies and therapeutic interventions, ultimately advancing our understanding of the long-term consequences associated with rTBI and potential avenues for intervention.
Introduction
Traumatic brain injury (TBI) affects an estimated 69 million people each year (1) and imposes a burden on the world economy of over $400 billion (2). In the United States, more than 472,000 military service members sustained at least one brain injury between 2000 and 2022, with many reporting head injuries before service (3). Studies have shown that TBI is an environmental risk factor for neurodegenerative diseases including Alzheimer's disease (AD) and other dementias (4,5), while those with repeated head trauma are at risk of developing chronic traumatic encephalopathy (CTE) (6). A previous head injury increases the risk for a subsequent head injury; thus, more than 260 per 10,000 military service members experience a subsequent head injury within 1 year of an initial TBI (7). Athletes participating in high-contact sports are at risk for repeated head trauma, and exposure to repetitive TBIs (rTBIs) is common in professional athletes (8,9). Some American football linemen experience nearly 2,000 impacts over the course of their career (8,9). While repeated head trauma is commonly linked to contact sports such as football and boxing, it is also evident in the context of intimate partner violence (IPV). Thirty to 94% of women experiencing IPV report at least a single brain injury, with an estimated 80-90% of women sustaining injuries to the head and neck (10,11). Those who experience brain injuries from IPV may report chronic cognitive impairments in memory and learning (12). Over time, the accumulation of these traumatic events may lead to the development of CTE, a progressive neurodegenerative disease induced by repeated blows to, or rapid displacement of, the head, producing chronic changes in cognition, memory, and mood (6). Emerging literature suggests that neurodegenerative changes may occur in women who have experienced IPV, including a recent case study in which CTE-like pathology was reported (13,14). The link between CTE and repeated trauma in athletes is well established. In a convenience sample of 202 deceased American football players, 87% were diagnosed post-mortem with CTE; the affected percentage was higher (99%) when the sample was restricted to NFL players (15). In a post-mortem study of rugby and soccer players, eleven had experienced repeated head trauma, and CTE pathology was found in eight of the eleven (16). Despite attempts to use neuroimaging as a mechanism to identify and diagnose CTE before death, the formal diagnosis of CTE occurs only upon autopsy, and no effective therapeutic interventions exist to prevent or mitigate neurodegeneration following rTBI. Mammalian species, such as rats, mice, and pigs, have been used to model TBI and other neurodegenerative diseases, including CTE, to elucidate long-term outcomes. Although they have provided key insight into numerous secondary injury mechanisms and therapy development (17-19), there are several limitations to the existing literature: (1) lengthy observation periods for the model organism, (2) high experimental and associated costs, (3) relatively difficult and lengthy genetic manipulations, and (4) ethical concerns regarding a large number of mammals experiencing pain and debilitating injury. For these reasons and others, over the past several decades, researchers have adopted non-mammalian models such as fruit flies (Drosophila melanogaster; Drosophila), nematodes (Caenorhabditis elegans; C. elegans),
zebrafish (Danio rerio), and sea lampreys (Petromyzon marinus) to model human neurodegenerative diseases and to map the etiopathogenesis of aberrant tau formation after TBI (20). Lower-order vertebrate and invertebrate models offer important potential benefits for studying TBI-induced neurodegeneration, including shorter lifespans to study endpoints, vast genetic tools to manipulate the expression of genes of interest, high-throughput analysis to identify genetic and biochemical networks, screening techniques to identify potential therapeutics, and reduced cost. Here, we highlight the use of invertebrate and aquatic vertebrate organisms to define the basic mechanisms underlying repeated TBI and to model rTBI-induced neurodegeneration.

Mechanisms of acute and repeated traumatic brain injury

Primary injury

As a result of TBI, two separate injuries occur: a primary injury at the moment of impact, which causes a secondary injury that unfolds in the minutes to hours after the initial impact. Blast injury, penetrating injuries, direct impact, and rapid acceleration and deceleration forces can injure the brain, producing a primary injury (21,22). Within milliseconds, the primary impact produces TBI, causing brain tissue to undergo rapid movement and tissue deformation (23). The primary injury leads to the shearing of white matter tracts, resulting in the formation of focal contusions as well as intra- and extracerebral hematomas (22).

In closed-head trauma, mechanical force transmits energy to neurons and glia, which may cause traumatic disruption of CNS structures, disturbances in circulatory autoregulation, impairment of the blood-brain barrier (BBB), and acute cellular dysfunction (23-25). The brain is particularly vulnerable to mechanical force because of its viscoelastic nature and lack of structural support; it is therefore ineffective at withstanding the mechanical forces from a blow to the head (26). Linear acceleration forces exerted during traumatic events can lead to the formation of superficial brain lesions, whereas rotational forces rotate the brain around a fixed axis (27,28). These rotational forces impart damage to deeper cortical structures (26,29). Translational forces, specifically linear acceleration forces, impart damage to superficial gray matter, generating cerebral hemorrhages and cortical contusions (27,30). In contrast, rotational forces mechanically and physiologically damage the deep cerebral white matter axons, resulting in diffuse axonal injury (27,28). It is hypothesized that axons are further damaged when rapid acceleration and deceleration forces promote the dissociation of tau from microtubules by altering microtubule dynamics, leading to subsequent tau hyperphosphorylation and aggregation (31,32). However, others suggest that tau hyperphosphorylation occurs first, altering microtubule dynamics and affecting tau's association with microtubules (33,34). Multiple exposures to blast force also result in an accumulation of pathological tau aggregates in the brain (35). Rapid distortion of neuron shape may also induce tau hyperphosphorylation, resulting in tau mislocalization (36). Collectively, these studies suggest that force from a primary injury contributes, at least in part, to the development of neurodegenerative tauopathies.
Secondary injury

The biochemical and cellular responses to the initial impact produce additional damage to the brain, resulting in a secondary injury. Following a primary injury, massive disturbances in brain metabolism, neuroinflammatory responses, microstructural changes, and behavioral changes occur (reviewed in (37)). Often a consequence of injury, disruption of neuronal and glial osmotic control drives cellular edema, the predominant form of brain edema immediately following TBI (38). Brain edema likely exacerbates injury by increasing cytotoxicity and promoting cell death (39). It is hypothesized that following trauma, extracellular glutamate rises, initiating activation of N-methyl-D-aspartate (NMDA) receptors, which promotes the influx of calcium (41).

Oxidative stress damages brain tissue by supplying an excess of reactive oxygen species (ROS) and reactive nitrogen species (RNS) (42). These free radicals disrupt cellular function and preferentially attack the hydrophobic portion of the lipid bilayer (42). Oxidative stress can oxidize amino acids, resulting in protein modification and loss of catalytic function (43). Protein modifications lead to severe protein aggregation within hours of post-ischemic injury (44). Endogenous antioxidants such as glutathione (GSH) play a vital role in protection against ROS and RNS. Depletion of GSH exacerbates brain infarction following cerebral ischemia (45,46). After TBI in rodents, GSH decreases in the hippocampus, potentially leading to apoptotic neuronal death (46).

Neuroinflammation, while it can promote recovery during a limited period, also contributes to the pathophysiology of secondary injury by exacerbating damage. The normal BBB prevents the entry of hydrophilic molecules through tight and adherens junctions between endothelial cells (47,48). Following TBI, the BBB can be disrupted, recruiting leukocytes (49). The damage also activates resident microglial cells, which can remain in an activated state for years following TBI (50,51). Chronic inflammation following traumatic brain injury increases axonal degeneration and neuronal loss (52,53), and the resulting injury and brain dysfunction may have a delayed onset and persist long-term, leading to dementia or CTE. Microglia, along with astrocytes, participate in "reactive gliosis," an aggressive response to neurotrauma involving enlarged glial cells in damaged brain areas (54). Microglial cells function like peripheral macrophages and secrete proinflammatory cytokines and chemokines (55). Both after brain injury and in neurodegenerative diseases such as AD, resident immune cells like astrocytes and microglia are elevated (53), implicating inflammation as a potential link between the two phenomena.
Recent evidence suggests that activated microglia can have detrimental effects, as they directly correlate with the extent of tau pathology (55,56) and can increase amyloidogenic amyloid precursor protein (APP) production (57). Cherry and colleagues investigated the relationship between neuroinflammation and CTE and found that the duration of repeated head injury exposure predicted the activated microglial cell density and subsequent greater hyperphosphorylated tau pathology (58). The increase in aberrant APP production eventually leads to the amyloid beta (Aβ) plaques that have previously been associated with AD (59), emphasizing the role of neuroinflammation in the development of continuing injury long after TBI occurs. However, several models of TBI in rodents demonstrate a reduction in amyloid beta plaques following TBI (60,61), and one study showed that mice overexpressing amyloid precursor protein had a rise in unaggregated Aβ in the hippocampus with extensive hippocampal neuronal death, thereby suggesting that the plaques may be protective against unaggregated Aβ toxicity (62), though this remains unclear. Therefore, the complex mechanistic underpinnings of amyloidogenesis and tauopathies must be explored more thoroughly.

Acute and repeated brain trauma

Several studies highlight the different responses to single as opposed to multiple or repeated head injuries by characterizing the immediate and delayed effects on brain metabolism, neuroinflammatory responses, microstructural changes, and behavioral changes. Following a single mild TBI in mice, glucose utilization in the hippocampus and sensorimotor cortex increased in the first 3 days following injury, while rTBI (a second injury 3 days after the first) failed to elicit the same immediate response (63). However, after 20 days, rTBI mirrored single head injury with respect to glucose utilization (63), indicating a delayed effect on brain metabolism after rTBI. Moreover, axonal degeneration, increased glial activation, and proinflammatory cytokine gene expression were detected 40 days after the initial repeated injuries, highlighting the prolonged neuroinflammatory responses present after repeated but not single injuries (63). Studies in mammals demonstrate that a single TBI is associated with transient increases in hyperphosphorylated tau (64), while deposits of hyperphosphorylated tau aggregates are associated with rTBI (58,65). Additionally, chronic mild rTBI increased tau abundance within the gray matter up to 3 months following injury (66), and rTBI led to greater phosphorylated tau accumulation than a single mild TBI (67). This evidence suggests that acute and repeated injuries have distinct temporal patterns of glucose utilization, neuroinflammatory responses, and tau hyperphosphorylation. Since prolonged neuroinflammatory responses are associated with an increased risk of neurodegenerative disease (68), this evidence suggests particular mechanisms that might be invoked to explain neurodegeneration following repeated, non-disabling head trauma.

Microstructural and behavioral changes also occur after rTBI. Multiple head injuries resulted in more severe microstructural changes, cortical volume loss, behavioral deficits, and histopathological alterations compared with single injuries (69). Jamnia et al.
(70) demonstrated persistent memory deficits and structural changes in the cortex and corpus callosum in rats exposed to repeated concussions (three injuries, 48 h apart) (70). These rats also exhibited behavioral deficits, anxiety, and increased corticosterone levels following rTBI (70). When piglets experienced one high-level rotational injury versus one high-level rotational injury with four subsequent mid-level rotational injuries administered 8 min apart, the multiple-rotation injury group experienced greater gait times 1 day after injury (71). Overall, gait patterns were normal in the single-rotation group but abnormal following the additional rotations (71), suggesting a long-term effect of repeated rotational brain injury on locomotor behavior. Recent studies have underscored the accumulating nature of symptoms in adolescents with repeated concussions, with higher symptom scores observed after the second concussion compared with the initial one (72). Following a second concussion, patients reported an increased burden of symptoms, particularly in the cognitive, sleep, and neuropsychiatric domains (73). Collectively, these studies emphasize the importance of considering the cumulative effects of repeated head injuries, with potential long-term consequences for brain structure, function, and behavior.

Tau's role in neurodegenerative disease

A recent NINDS consensus document indicated that CTE is likely to occur in the years to decades following rTBI; pathognomonic lesions of tau hyperphosphorylation occur in the cortical sulci surrounding small blood vessels (74). While many areas of the brain may be affected by rTBI, the hippocampus, an important structure for memory and cognition, may be particularly vulnerable to subsequent injuries following a concussion-like injury, leading to changes in mood, memory, and anxiety regulation (75-78). The exact mechanisms by which these cognitive changes are triggered by repeated concussion (rather than by physical disruption of neural tissue) remain unclear, though several studies have suggested that neurotoxicity, functional impairment of neuronal synapses, and loss of axonal stabilization by aberrant microtubule-associated protein (MAP) tau may contribute to memory impairment and loss (79,80) (Figures 1A,B). Tau is a crucial protein in the central nervous system (CNS) involved in the stabilization of microtubules and regulation of axonal transport (81,82), and its accumulation, hyperphosphorylation, and aberrant localization are recognized as hallmarks of CTE (74). In humans, six different isoforms of tau are produced in the adult brain. These arise via alternative splicing at the protein's amino- and carboxy-terminal ends (Figure 2). Once phosphorylated on multiple sites (e.g., Ser356, Ser396, Thr231), tau loses the ability to bind microtubules (33,83,84), thereby promoting microtubule depolymerization and instability.
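The combinatorics behind the six human isoforms can be made concrete in a few lines of code. The minimal sketch below simply enumerates the 0N/1N/2N states (exons 2 and 3) against the 3R/4R states (exon 10); the approximate residue counts are standard values from the human tau literature, not figures taken from this review.

from itertools import product

N_INSERT_AA = 29   # each N-terminal insert (exon 2 or 3) adds ~29 residues
R2_AA = 31         # exon 10 encodes the second microtubule-binding repeat (R2)
BASE_0N3R = 352    # length of the shortest isoform, 0N3R, in amino acids

for n_inserts, has_exon10 in product((0, 1, 2), (False, True)):
    # Exon 3 is only included together with exon 2, so 0N/1N/2N are the only
    # N-terminal states; inclusion of exon 10 toggles 3R vs 4R.
    repeats = 4 if has_exon10 else 3
    length = BASE_0N3R + n_inserts * N_INSERT_AA + (R2_AA if has_exon10 else 0)
    print(f"{n_inserts}N{repeats}R tau: ~{length} aa")

Running this prints the familiar ladder from 0N3R (~352 aa) to 2N4R (~441 aa), which is the full set of six adult CNS isoforms referred to above.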
Abnormal phosphorylation of human tau (hTau) by both non-proline-directed and proline-directed kinases results in insoluble and misfolded tau, leading to the aberrant accumulation and aggregation of filamentous tau polymers, known as paired helical filaments (PHFs) and neurofibrillary tangles (NFTs), two features of CTE (74). Following, and perhaps due to, the formation of NFTs, neuronal degeneration and death result in release of tau into the extracellular space (90). In turn, this promotes tau uptake into astroglia (91). Some studies have even suggested that the spread of tau through glial cells mirrors a prion-like spread, though whether the misfolded tau actually promotes subsequent local misfolding of the normal trans isomer of tau has not been investigated (84,92).

As noted, the trans form is the physiological conformation of tau. In contrast, the cis conformation of aberrantly phosphorylated tau (p-tau) has been linked to the pathogenesis of neurodegenerative disease and of cognitive symptoms (65,93,94). In a rodent study of impact and blast injury, the appearance of the cis configuration of hyperphosphorylated tau was associated with neurotoxic effects and with spread to regions contralateral to the injury, associated with cognitive impairment (65,94). When targeted with a monoclonal antibody against cis p-tau, neuronal apoptosis was prevented, suggesting that accumulation of cis p-tau occurs very early in the pathogenic sequence of post-TBI neurodegeneration (94). PIN1, a peptidyl-prolyl isomerase, plays a role in isomerizing phosphorylated threonine-proline bonds at multiple sites (95-97), including those in tau. However, only the isomerization at the phosphorylated Thr231-Pro232 bond in tau is associated with a biological phenotype (98). The isomerization of p-tau at Thr231-Pro232 from cis to trans promotes both dephosphorylation of tau by PP2A and microtubule stabilization (99). Depletion of Pin1 results in apoptosis and mitotic arrest (100). In an AD model, paired helical filaments contribute to neuronal death (101). Several studies demonstrate that, upon restoring the prolyl isomerase in a cell model, Pin1 promotes microtubule binding and stability in vitro as well as dephosphorylation at amino acid site Thr231 (101,102). The specific anatomic sites where tau hyperphosphorylation is found, and the pattern of neuronal spread, can differentiate between different tauopathic neurodegenerative diseases. In AD, NFTs arise in the brainstem and entorhinal cortex before spreading to the medial temporal lobe and distributing evenly in neocortex layers III and V (65,103,104). In contrast, CTE develops in the deep sulci of the superficial neocortical layers II and III of the cerebral cortex, focally and perivascularly (86). The spread continues irregularly to the neocortex, medial temporal lobe, diencephalon, basal ganglia, and brainstem (65,105). Though the pathologic spread of tau differs, AD and CTE share at least two of the same tau phosphorylation sites, including Thr231 and Ser199, which have been implicated in neurotoxicity and neuronal dysfunction (65). These common phosphorylation sites, in addition to patterns of deposition, allow AD models of tauopathy to inform CTE tauopathy studies.

Figure 1. The function of tau in neurons and its role in brain injury. (A) In a healthy neuron, tau plays a vital role in stabilizing and supporting axonal transport by binding to microtubules and suppressing microtubule depolymerization (81,82). The phosphorylation state of tau influences its binding affinity for the microtubule, with hypophosphorylation supporting tighter binding (33,83,84). (B) Brain injury triggers various cascades, leading to the hyperphosphorylation of tau by protein kinases (85). This hyperphosphorylated state disrupts the binding of tau to microtubules, causing microtubule instability and depolymerization (33,83,84). Consequently, tau undergoes filamentous aggregation, forming pathognomonic lesions characteristic of chronic traumatic encephalopathy (CTE), such as neurofibrillary tangles (86). Image created using BioRender.com.

Throughout this review, "CTE-like" will be used to describe models that recapitulate the phosphorylation and aggregation profile of tau in human CTE, but for which the criterion that tau aggregates occur in the sulci cannot be met because the brain is lissencephalic. Since there is currently no treatment to prevent or mitigate CTE or other forms
of neurodegeneration after TBI, researchers have focused on two main drivers of injury-induced sequelae, Pin1 and tau. Lu et al. (101) have produced anti-Pin1 antibodies to restore the function of phosphorylated tau in vitro (101) but have not yet extended the studies to in vivo CTE-like models. In recent years, the literature has turned its focus to anti-cis tau antibodies that work to clear phosphorylated tau aggregates in AD, CTE, and severe TBI animal models (93,94,106), with improved outcomes reported in vivo. Albayram and colleagues found that repetitive mild injuries led to more severe phosphorylated cis tau and tangle-like structures resembling CTE-like pathology; treatment with an antibody against cis phosphorylated tau led to the elimination of cis phosphorylated tau and total tau accumulation (93). A clinical trial of antibodies targeting cis-hyperphosphorylated tau at Thr231 is currently underway (107). Perhaps these promising developments in antibody-based therapies will result in effective treatments for CTE and other related tauopathies.

Non-mammalian models of neurodegenerative disease

For decades, researchers have used rodent models to recapitulate traumatic brain injury and its subsequent sequelae. Difficulties in modeling acceleration and deceleration forces limited the rodent models to specific features of TBI and led to the creation of contusion injury models, namely controlled cortical impact and fluid percussion models (108). The rapid increase in molecular and genetic techniques, in addition to the commercial availability of transgenic rodents and materials, makes rodents an attractive substitute for large animal models of brain injury (109). However, large animal models of traumatic brain injury can model a more dynamic range of TBI, namely replicating features of acceleration/deceleration forces (110), which are limited in rodent models. Additionally, rodents have lissencephalic brains with no rigid tentorium cerebelli (109), which restricts the modeling of neurodegenerative diseases with pathology localized to the sulci of the brain. In particular, the pathognomonic lesions of human CTE are found in the sulci near perivascular regions, regions that are particularly vulnerable to mechanical stress from injury (86). Large animal models like primates have gyrencephalic brains, increasing the translatability of these models to humans (109,110). Like rodent models, however, primates have drawbacks: primate models are limited by cost and a lack of established post-TBI functional assays, are technically difficult, and raise ethical concerns. Therefore, the use of non-mammalian models may be an attractive substitute for large animal and rodent models.

Figure 2. Splicing variants of tau. Tau undergoes alternative splicing involving exons 2, 3, and 10, resulting in six different isoforms of the protein defined by the presence or absence of exons encoding microtubule-binding domains (R) and N-terminal insertions (N) (87). The ratio of the different tau isoforms varies in different regions of the brain and during different stages of development (88). Tau isoform composition also varies in tauopathic diseases, such as CTE and Alzheimer's disease, which may impact aggregation and pathology (89). Image created using BioRender.
Petromyzon marinus (sea lampreys)

Harnessing lower vertebrates for the study of proteins implicated in human disease enables mechanistic studies in a large, identifiable neuron population while extending studies from invertebrates. The robust neuroregenerative capabilities and functional recuperation exhibited by the CNS in lower-order vertebrates make them an attractive experimental model for investigating the role and behavior of abnormal tau following TBI and neurodegeneration. In particular, the biochemical properties of tau have been studied in the lamprey. Hall et al. (111) utilized sea lamprey anterior bulbar cells (ABCs) to demonstrate that chronic, full-length human tau overexpression resulted in fibrillary tangles reminiscent of the tau tangles present in neurodegenerative disease, particularly AD (111). They also demonstrated that proprietary small molecules prevented neurodegeneration in cells containing an accumulation of tau filaments (112), while Honson et al. (113) provided evidence that a small-molecule inhibitor, N3 (a benzothiazole derivative), can arrest tau aggregate formation in sea lamprey neurons (113). Sea lampreys have also been used to study the movement and deposition of tau and its subsequent role in neurodegeneration. One study demonstrated that mutated tau, particularly the P301L form, migrates in a transneuronal manner, while wild-type tau does not (114,115). Another study showed that extracellular human tau moves both synaptically and non-synaptically (116). Additionally, exonic mutations in human tau accelerated degeneration in lamprey ABCs (117). Interestingly, Le et al. (116) noted similarities between tau patterns in lampreys and tau patterns in humans: over time, extracellular tau deposits in the lamprey mirrored the deposits indicative of human CTE and even resembled the perivascular halos that are pathognomonic of CTE in humans (116). Taken together, these studies suggest that lampreys serve as an excellent model of some features of neurodegenerative disease, highlight their use as a rapid screening tool, and may be used to further investigate the mechanisms driving the formation of aberrant tau after TBI.

Danio rerio (zebrafish)

The zebrafish (Danio rerio) proteome exhibits a notable degree of homology with the human proteome, and the two species share similar brain anatomy and function, making the zebrafish a suitable organism for investigating TBIs. Several models recapitulate closed-head TBI in the zebrafish. McCutcheon et al. (118) employed targeted, pulsed, high-intensity focused ultrasound (pHIFU) to induce damage to the brain by mechanical force (118). Zebrafish injured by pHIFU demonstrated increased expression of β-APP and β-III tubulin, a microtubule protein (118), suggesting that this model may be used to investigate the pathophysiology of TBI. Additionally, a non-invasive mild TBI model was developed in adult zebrafish using a laser to induce damage to neural tissue (119). Laser-induced damage to the brain resulted in dilated vessels, hemorrhage, and edema 1 day post-injury (119). These signs suggest that laser-induced brain injury reproduces features of the pathophysiology associated with mild TBI (119). In the most recent study of zebrafish TBI models, Gill et al.
(120) developed a method to model blast injury without the use of anesthetics by dropping a weight onto a fluid-filled plunger (120,121). This method of injury produced cell death, hemorrhage, blood flow abnormalities, and tauopathy, consistent with TBI (121). The homology between zebrafish and humans in terms of TBI pathophysiology makes zebrafish an excellent tool to advance our understanding of TBI and its underlying mechanisms.

In addition to modeling TBI pathophysiology, zebrafish have been used as a biosensor to investigate tauopathies. One study conducted a high-throughput screen of herbal extracts for the ability to reduce neuronal death initiated by aberrant tau. Of the 400 herbal extracts screened in the zebrafish, 45 were identified as having the potential to reduce tau-induced neuronal death (122). Additionally, Lopez et al. (123) investigated the clearance kinetics of an aberrant tau protein variant, p.A152T, and applied both pharmacological and genetic approaches to reduce the burden of p.A152T tau in zebrafish by upregulating autophagy (123). Reduction of p.A152T by upregulation of autophagy ameliorated morphological abnormalities and reduced hyperphosphorylated tau (123). In another study, Cosacak et al. (124) created a transgenic zebrafish to explore the aberrant human tau variant P301L. P301L generates neurofibrillary tangles in mammalian models of tauopathies (125,126), though it did not produce neurofibrillary tangles in the zebrafish nor exacerbate Aβ42 toxicity (124), suggesting a protective mechanism in the zebrafish that may be exploited. Such therapies aimed at reducing aberrant tau burden may serve as a strategy for treating tauopathies.

While aquatic vertebrates are useful models for studying neurodegeneration and the mechanisms driving aberrant formation of tau, their inherent ability to regenerate neurons following injury (127) may confound the consequences of the secondary injury. However, understanding regeneration may provide key insights into pathways provoked by TBI and may lead to the development of therapeutics to mitigate the effects of, or potentially reverse, TBI-induced pathology. Furthermore, these models provide a way to screen various interventions at relatively low cost while examining histological and biochemical correlates of TBI.

Invertebrate model organisms

Caenorhabditis elegans (roundworms)

In C. elegans, researchers have utilized blast injuries to model mild TBI. However, the existing blast methods have yielded heterogeneous outcomes. Angstman et al. developed a shock wave injury model that produces a consistent and quantifiable injury, but its predictive ability for individual outcomes remains limited. In 2019, however, Miansari et al. (128) demonstrated that high-frequency surface acoustic waves (SAW), confined within a narrow range of the substrate surface, induced mobility and short-term memory deficits in a C. elegans model of blast-induced mild TBI in a more homogeneous manner than previous models in the literature, suggesting that SAW may be an improved early-stage model for human TBI. Additionally, Angstman et al. have shown that their blast-related model of mild TBI in C. elegans recapitulates features of mild TBI (129,130). This compelling evidence further strengthens the suitability of C. elegans as a viable non-mammalian model for TBI and a suitable alternative model organism for mitigating the ethical concerns raised when using mammalian models to explore repetitive trauma. Beyond inducing injuries, researchers have employed C.
elegans as a model organism to investigate the effects of TBI-modified tau. Brain homogenates from mice with chronic TBI, and/or intracerebral inoculation of tauTBI (a form of tau that aggregates after chronic TBI), impaired motility and neuromuscular synaptic transmission in C. elegans (131,132). Surprisingly, when naive mice were intracerebrally inoculated with tauTBI, a prion-like spread of tauTBI occurred, resulting in memory deficits and synaptic toxicity (131,132). Moreover, Diomede et al. established the therapeutic potential of Aβ1-6A2V(D), an all-D-isomer synthetic peptide, to promote tau degradation by proteases and impede tau aggregation in a C. elegans model (114). Additionally, the average lifespan of C. elegans ranges from 9 to 23 days depending on the rearing temperature (133), highlighting the ability to track tau aggregation through the entire lifespan of the organism. Collectively, these studies demonstrate the potential of using C. elegans as a biosensor to investigate and manipulate the biochemical properties of tau and its interactions with potential therapeutic peptides in a faster and less complex manner than mammalian models. Overall, C. elegans is a valuable alternative to mammalian models for studying neurodegenerative diseases, providing an array of genetic tools and simple mechanistic studies to understand tau in the context of dysfunction, such as TBI.

Drosophila melanogaster (fruit flies)

Drosophila melanogaster is an excellent model system for studying the longitudinal effects of rTBI and its effect on endogenous tau protein. Using Drosophila, it is possible to study post-injury behavior while interrogating histological features of injury and correlating these responses with proteomic and transcriptomic changes (e.g., by mass spectrometry and RNA-seq). While tau has been linked to neurodegeneration and neurotoxicity, there is only a rudimentary understanding of the upstream biochemical mediators of tau in the context of rTBI and CTE. Several studies have expressed wild-type and mutant human tau proteins in Drosophila melanogaster to model AD, although hTau transgene expression in Drosophila is not an ideal functional model, in part because of poor binding to Drosophila microtubules. This, as well as differences in phosphorylation sites and uncertainty about whether hTau protein models endogenous NFT formation, limits the applicability of this model. Drosophila tau (dTau) contains five putative microtubule-binding repeats and lacks the N-terminal repeats seen in human tau, despite sharing 66% homology with the hTau protein (134,135). At least six CTE-associated phosphorylation sites are observed in human tau, and four of those, Thr231, Ser202, Thr205, and Ser199, are conserved in dTau as Thr151, Ser106, Thr123, and Ser103, respectively (these correspondences are collected in the short lookup-table sketch at the end of this section). While many studies express hTau in Drosophila to model tauopathic diseases, some have shown that dTau can confer the same neurotoxic and neurodegenerative effects as hTau (135). Thus, by investigating dTau in Drosophila, its endogenous properties can be readily understood and may represent an informative window into TBI-induced tauopathy (CTE-like) pathogenesis. Overexpression of dTau in Drosophila leads to neurotoxicity and eventual neurodegeneration similar to that observed with overexpression of hTau in Drosophila (135), though these overexpression models have not been studied in terms of the upstream and downstream mediators of tau-associated neurotoxicity. Neither dTau nor exogenous expression of hTau has been examined with
respect to their roles in CTE in Drosophila, perhaps due to the lack of sulci and perivascular regions in Drosophila of the kind associated with human CTE pathogenesis. Instead, characterizing and exploring the vulnerable regions of the Drosophila brain in the context of repetitive TBI may help to establish a model of chronic traumatic encephalopathy, and evaluating endogenous dTau will provide valuable insights into the progression of tauopathic dysfunction after injury.

The current Drosophila models of head-specific TBI study acute changes (136,137), while current models of rTBI are not head-specific. Traditional methods of TBI in flies utilize high-impact devices (138) or Omni-Bead Ruptor homogenizing platforms (139) that may be used for high-throughput injuries. The high-impact devices utilize a spring attached to a fly vial that, when stretched and released, generates an impact against a tabletop, while the Omni-Bead Ruptor freely shakes a small screw-cap tube in which the flies are placed. While these methods generate high-throughput injuries, the uncontrolled, full-body injury potentially results in confounding effects on climbing and walking assays, two paradigms commonly used to assess behavioral sequelae after TBI (136). To overcome this limitation, Sun and Chen developed a head-specific model that uses carbon dioxide to propel an impactor against the head (137). They explored walking distance and lifespan as potential markers of injury resulting from repeated head impacts in Drosophila (137). Despite this advancement, the use of a manual FlyBuddy switch system introduces variability in the timing of the impacts and the duration for which carbon dioxide propels the impactor. In addition, the underlying neurobiological changes that occur after multiple injuries have not been explored. The Bonini lab developed a fly impactor model using a piezoelectric striker to compress the fly head against a metal fly collar that fixes the head in place, and demonstrated acute injury markers that progressed with increasing severity, establishing a more realistic single-TBI model in Drosophila (136,140).
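For cross-species work of the kind described above, the human-to-Drosophila phospho-site correspondences are naturally organized as a lookup table. The minimal sketch below encodes exactly the pairings stated earlier in this section; the function and variable names are illustrative and not from any published analysis pipeline.

from typing import Optional

# Human tau CTE-associated phospho-sites and their conserved dTau residues,
# as listed in the text (Thr231->Thr151, Ser202->Ser106, Thr205->Thr123,
# Ser199->Ser103).
HTAU_TO_DTAU = {
    "Thr231": "Thr151",
    "Ser202": "Ser106",
    "Thr205": "Thr123",
    "Ser199": "Ser103",
}

def conserved_in_dtau(human_site: str) -> Optional[str]:
    """Return the conserved dTau residue for a human tau site, if one is listed."""
    return HTAU_TO_DTAU.get(human_site)

for site in ("Thr231", "Ser396"):  # Ser396: a human phospho-site with no conserved pair above
    print(site, "->", conserved_in_dtau(site) or "no conserved dTau site listed")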
Discussion and conclusion

In this review, we discuss the use of invertebrate and aquatic vertebrate animal models to explore the mechanisms driving tauopathies and other changes post-TBI. We highlight that these model organisms offer several advantages for research and allow for cost-effective, rapid, discovery-based approaches and potential high-throughput screens for therapeutics, in addition to reviewing differences in behavioral and physiological responses to acute and repeated brain injuries. It is important to note that while acute and repeated injuries differ in their effects on glucose metabolism and even in temporal patterns of tau expression, several studies have shown that acute injuries may result in non-transient tau expression (141,142). In rodents, exposure to a blast injury at 10.8 psi once per day for 3 days resulted in an accumulation of pathological tau aggregates in the brain (35). CTE pathology is also observed in some American football players with multiple concussive and subconcussive blows to the head (143). CTE pathology was also found in military personnel who underwent a single IED blast injury (143), and a single moderate to severe brain injury resulted in tauopathic lesions in the brain (85). These studies suggest that the total mechanical force accumulated by the brain over time may represent one factor influencing the development of CTE, independent of the number of brain injuries. Severity of the injury, an indirect measure of the mechanical force sustained from a TBI, may also play a role in the development of CTE, with evidence of neuroinflammation in the brain 17 years after the initial injury (50). Several studies demonstrate that chronic neuroinflammation following an acute injury may serve as a contributing factor to neurodegeneration. Given the important role of neuroinflammation in TBI previously discussed, and the recent studies that have revealed the powerful effects of microglial depletion strategies on modulating neuroinflammation after TBI, it will be critical to fully characterize the acute and chronic neuroinflammatory responses in a model organism that allows for rapid longitudinal and genetic studies (144-148). The emerging key role of age-related microglial phenotypes, recently described in (145), and their link to neurodegeneration could represent a perfect opportunity for exploration in TBI models in Drosophila, given the relative ease and efficiency of studying long-term effects and outcomes.
2024-03-22T15:53:37.898Z
2024-03-19T00:00:00.000
{ "year": 2024, "sha1": "d4dc24b7a8134d64773aa3e1b59d56c12b784323", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/journals/neurology/articles/10.3389/fneur.2024.1378620/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9540090bd2d918f1fbe06104253e02f0e3201107", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
9371658
pes2o/s2orc
v3-fos-license
Spinal Epidural Abscess with Pyogenic Arthritis of Facet Joint Treated with Antibiotic-Bone Cement Beads: A Case Report

Most epidural abscesses are a secondary lesion of pyogenic spondylodiscitis. An epidural abscess associated with pyogenic arthritis of the facet joint is quite rare. To the best of our knowledge, there is no report of the use of antibiotic-cement beads in the surgical treatment of an epidural abscess. This paper reports a 63-year-old male who presented with a 1-week history of radiating pain to both lower extremities combined with lower back pain. MRI revealed space-occupying lesions located on both sides of the anterior epidural space of L4, and CT scans showed irregular widening and bony erosion of the facet joints of L4-5. A staphylococcal infection was identified after a posterior decompression and open drainage. Antibiotic-bone cement beads were used for local control of the infection and as a spacer, or an indicator, for the second operation. Intravenous anti-staphylococcal antibiotics resolved the back pain and radicular pain and normalized the laboratory findings. We point out not only the association of an epidural abscess with facet joint infection, but also the possible indication for antibiotic-bone cement beads in the treatment of epidural abscesses.

Introduction

An epidural abscess is an uncommon disease that may cause irreversible paralysis or death. Pyogenic arthritis of a lumbar facet joint is quite a rare cause of lower back pain. Fewer than ten cases of epidural abscesses associated with pyogenic arthritis of a lumbar facet joint have been reported in the literature. To the best of our knowledge, there is no report of the use of antibiotic-cement beads in the surgical treatment of an epidural abscess. This report presents an epidural abscess complicated by pyogenic arthritis of a lumbar facet joint as an unusual cause of back pain, and describes the use of antibiotic-cement beads as a possible supplement to the surgical treatment of this condition.

Case Report

A 63-year-old male was transferred from a local clinic, complaining of radiating pain to both lower extremities lasting for one week. Tender points were found on the midline of the back and over both paravertebral muscles, but body temperature was normal. The range of motion of his spine was severely limited, and straight leg raising was limited to 60° bilaterally. The motor power of the extensor hallucis longus was decreased bilaterally, and ankle jerks were absent bilaterally. He denied any history of acupuncture or local injections in the back. Initial investigations revealed a white blood cell count of 9,300/μL, an erythrocyte sedimentation rate of 118 mm/hr and a C-reactive protein of 12.5 mg/dL. Serum GOT/GPT levels were 59/87 IU/L, but a sonogram of the liver was normal. Electromyography and nerve conduction studies revealed radiculopathies of L5 and S1 bilaterally. Plain radiographs of the lumbar spine showed no specific abnormality. MRI revealed two space-occupying lesions, located on both sides of the anterior epidural space, and a signal change of the dural sac at the level of L4 and L5 (Fig. 1). Computed tomography (CT) axial scans revealed the same space-occupying lesions, adhesion and thickening of the roots of L4 and L5, and showed irregular widening and bony erosion of the facet joints of L4-5 (Fig. 2). A technetium-99m bone scan demonstrated increased uptake at the L4 and L5 levels (Fig. 3).
On the 4th day of admission, he had chills with a temperature of 38.9°C and increasing back and radicular pain. The white blood cell count was 11,200/μL, the erythrocyte sedimentation rate (ESR) was 123 mm/hr and the C-reactive protein (CRP) was 14.1 mg/dL. We decided on surgical decompression after parenteral antibiotics (a first-generation cephalosporin and an aminoglycoside) for 2 days. Under general endotracheal anesthesia, a total laminectomy and bilateral medial facetectomies of L4, and partial laminectomies of L3 and L5, were performed. In the surgical field, the L4-5 facet joints were destroyed, and the pus draining from the facet joints was located on both sides of the anterior epidural space. Severe adhesion between the dura and the annulus fibrosus, and thickening and adhesion of the L4 and L5 roots, were also noted. Histological examination of a frozen biopsy revealed inflammation. Drainage of the pus and removal of the inflamed granulation tissues and surrounding fibrous tissues were performed. Because of concern about incomplete removal of the infected tissues, and for local control of the infection, we decided to use antibiotic-cement beads. Vancomycin 4 g and bone cement 40 g were mixed, and 7 rows of beads were inserted on the posterior aspect of L3 to S1 (Fig. 4). We planned fusion, with or without instrumentation, after removal of the antibiotic-cement beads two weeks later. The radiating pain was relieved immediately after the operation. A first-generation cephalosporin and an aminoglycoside were administered intravenously for two weeks. The blood and intraoperative pus cultures demonstrated methicillin-sensitive Staphylococcus aureus. During the second surgery, some pus admixed with chocolate-colored liquid material was noted, so we abandoned the use of instrumentation. A posterolateral fusion with autogenous iliac bone at the level of L3 to L5 was performed. A second-generation cephalosporin and a quinolone were then administered intravenously for 3 weeks. An oral quinolone was then prescribed for four weeks until the ESR was normal. The patient was on bed rest for 3 weeks postoperatively and was then ambulated with a thoracolumbosacral orthosis for 3 months. At three months after surgery, solid fusion had been achieved (Fig. 5), and at 2-year follow-up the patient had returned to normal physical activity without low back pain.

Discussion

Epidural abscesses are usually a complication of pyogenic spondylodiscitis. Several cases of pyogenic arthritis of lumbar facet joints have been reported 1-5. Fewer than 10 cases of epidural abscesses associated with pyogenic arthritis of the facet joints have been reported 6-9. Epidural abscess formation complicated 25% of pyogenic facet joint infections, and 38% of the complicated cases developed severe neurologic deficit 8. In our case, the patient with radiculopathy had resolution of the radicular symptoms after surgery. As in our case, the WBC count is elevated approximately 50% of the time, whereas ESR and CRP are uniformly elevated. Staphylococcus aureus is the most common organism, accounting for 80% of reported cases 8. Evidence of a septic facet joint infection or an epidural abscess on plain radiographs, in the form of bony changes consistent with osteomyelitis, is not visible for 2-3 weeks after onset 1. A technetium-99m bone scan is 100% sensitive in detecting facet joint infection as early as 3 days after symptom onset 6.
However, gallium scans or indium-111-labeled leukocyte scans are more specific than technetium scans in differentiating infection from other diseases. CT can be used to localize the infection and to visualize bone and soft tissue involvement 6. MRI is the imaging modality of choice in the earliest stages and in delineating the extent of soft tissue involvement, including abscess formation 8. An epidural abscess without neurologic deficit can be treated with percutaneous drainage and parenteral antibiotics, but an epidural abscess with severe neurologic deficit is a surgical emergency and should be treated with direct decompression of the epidural space. Antibiotic-impregnated bone cement has been introduced for the prevention or treatment of infected artificial joints and infected long bones 10. We used antibiotic-bone cement beads for the treatment of a spinal epidural abscess as a local controller of the infection and as a spacer, or an indicator, for the secondary fusion operation. To date, the use of antibiotic-bone cement beads in the treatment of epidural abscesses has not been established, but we hope this modality will prove beneficial in the treatment of various spinal infections.
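As a rough cross-check of the antibiotic loading described in this case (4 g of vancomycin mixed into a 40 g pack of cement), the short calculation below expresses it as a weight fraction. The comparison to typical prophylactic loading (on the order of 1 g per 40 g pack, as commonly cited in the arthroplasty literature) is an assumption for context, not a figure from this report.

vancomycin_g = 4.0   # vancomycin mixed into the beads in this case
cement_g = 40.0      # one pack of bone cement powder

loading_pct = 100.0 * vancomycin_g / (vancomycin_g + cement_g)  # % of total mix
per_cement_pct = 100.0 * vancomycin_g / cement_g                # relative to cement alone

print(f"Vancomycin loading: {loading_pct:.1f}% of the total mix, "
      f"i.e., {per_cement_pct:.0f} g per 100 g of cement powder")
# ~9.1% of the total mix (10 g per 100 g of cement), well above the roughly
# 1 g per 40 g pack often used for prophylaxis, consistent with treatment-level
# ("high-dose") loading.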
2014-10-01T00:00:00.000Z
2007-06-01T00:00:00.000
{ "year": 2007, "sha1": "663b58271829d685aeeecde97d95d4b94e18ce6d", "oa_license": "CCBYNC", "oa_url": "http://www.asianspinejournal.org/upload/pdf/asj-1-61.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "663b58271829d685aeeecde97d95d4b94e18ce6d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234041627
pes2o/s2orc
v3-fos-license
Current Situation of TCM Treatment of Schizophrenia

Schizophrenia is a common clinical disease in the department of psychiatry. Traditional Chinese medicine (TCM) treatment of schizophrenia has a long history and a rich body of experience, produces few side effects, and can, to a certain extent, alleviate the condition of patients with schizophrenia. This article reviews the current situation of TCM treatment of schizophrenia in recent years from three aspects: traditional Chinese medicine treatment, acupuncture treatment and catgut embedding treatment.

Understanding of schizophrenia in Chinese medicine

The disease name of madness is found in the Huangdi Neijing, which gives a systematic description of its symptoms, etiology, pathogenesis and treatment. For example, "Su Wen·Pulse to Subtle" describes the symptoms of the patient at onset: clothes are not kept gathered, speech mixes good and evil, and the patient does not avoid friends and relatives; this is disorder of the spirit. Regarding the pathogenesis of madness, "all manias belong to fire" in "Su Wen·On the Importance of Truth" describes how evil fire affecting the heart can cause the disease. "Yang Qi is on the top, Yin Qi is on the bottom, the bottom is empty and the top is solid, so it is called crazy disease" in "Su Wen·Mai Jie" describes how an imbalance of Yin and Yang can cause the disease. "More blood, then anger; a lack, then laughter" in "Su Wen·Tiao Jing Lun" describes how disorder of qi and blood can cause the disease. "Lingshu·The Madness" states that when the disease is acquired in the womb, the mother has suffered a great fright. "Su Wen·Xuanming's Five Qi Theory" states: "The five evils are disordered; when the evils enter the Yang, they are mad; when they beat the Yang, they are epileptic." "Su Wen·On the Alternation of Qi" states: "The fire is too hot in the New Year, the heat is popular, illness and delirium." These passages discuss, respectively, the roles of emotion, innate heredity, exogenous pathogenic factors, and season in the etiology of madness. "Su Wen·Bing Neng Lun" discusses the treatment of madness: "How to cure it? Qi Bo said: have the patient drink pig-iron water." [2].

Traditional Chinese medicine treatment

Traditional Chinese medicine has a certain effect in the treatment of schizophrenia, as reported by Chenxia Liu, et al. [3].

Acupuncture treatment

Guimei Zhao [6] reported that Hamilton Anxiety Scale (HAMA) scores were significantly reduced, and the differences were significant at different time points (P < 0.05), showing that acupuncture can improve mental symptoms and relieve anxiety. Wei Liang [7] reported that acupuncture has a good effect in patients with chronic schizophrenia: a total of 58 patients were selected, and the results showed that acupuncture can improve oxygen free radical metabolism and immune resistance, and thus improve the symptoms of patients. Jihong Wu [8] and others [9] believed that acupuncture could improve cognitive function.

Acupoint catgut embedding therapy

Haifang Zhu, et al. [10] randomly divided 90 schizophrenia patients into groups and reported that the treatment effect was good. Yazhi Lv [11] used "embedding thread and adjusting mind" to treat schizophrenia at the Huatuo Jiaji points located at the 1st to 7th thoracic vertebrae, and from the 4th and 5th lumbar vertebrae to the 1st sacral vertebra; this method proved satisfactory. Shian Li, et al. [12] compared combined treatment with medication alone: the results showed that the brief psychiatric symptom evaluation score was lower than in the control group (P < 0.05), the effect was significant, and dopamine inhibition ability was greatly improved.
In conclusion, the treatment effect in the embedding group was better than in the control group (P < 0.05).

Conclusion

To sum up, a large number of clinical practice results show that TCM has achieved satisfactory results in the treatment of schizophrenia, and TCM treatment combined with low-dose antipsychotic medication can achieve satisfactory efficacy, superior to western medicine alone, while reducing the dose of antipsychotic medication and its side effects to a certain extent. Moreover, TCM treatment excels at starting from the whole, focusing on the main contradictions of the onset, and strictly observing the pathogenesis. Both the compatibility of traditional Chinese medicines and the selection of acupuncture points should be based on syndrome differentiation to make the treatment more focused, so that the key points of the disease can be grasped without neglecting the overall situation. I believe that the position of TCM in the treatment of schizophrenia will receive more and more attention.
2021-05-10T00:03:20.423Z
2021-02-02T00:00:00.000
{ "year": 2021, "sha1": "0b968216825d1a9bef01d5409e5b6af9640c5a8b", "oa_license": "CCBYNC", "oa_url": "http://aem.usp-pl.com/index.php/aem/article/download/177/171", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c1b6176aa49c5f66d00fdd3159af521fe2c3bee4", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
252221827
pes2o/s2orc
v3-fos-license
Keep calm and transcribe on: chromatin changes with age, but transcription can learn to live with it

Assessing age-related tissue dysfunction represents an emerging field and involves analyses that are far from trivial, often requiring the integration of several large-scale ("omic") techniques. In their recent work, Tessarz and colleagues (Bozukova et al, 2022) characterize changes in the transcriptional machinery during aging in mice and report some surprising findings.

Tissue function declines with age, perhaps one of the earliest and simplest biological observations. But over the last two decades, technological advances and in-depth analyses have triggered the realization that aging is much like any other biological process: amenable to mechanistic understanding and perhaps even manipulation. At the molecular level, aging manifests as decreased coherence in our biological clockwork (López-Otín et al, 2013). Moreover, an ever-growing list of age-associated alterations in signaling pathways and gene expression continues to be characterized (Tabula Muris Consortium, 2020). An important nexus of all these inputs and outputs is the transcriptional machinery itself. A complex sequence of events precedes successful gene transcription. In brief, chromatin is first activated and opened, rendering it receptive to transcription. This is followed by the recruitment and assembly of RNA Pol II on the target gene and, lastly, RNA Pol II is released into productive elongation, a process that is intricately regulated. In a tour de force, Bozukova et al assess each of these stages for all transcribed genes in liver tissue from young and old mice (Bozukova et al, 2022). They then integrate the genome-wide information, assembling a comprehensive overview of age-associated changes. Altogether, this constitutes the first systematic whole-organ study that combines chromatin accessibility, analysis of the core transcriptional machinery, and transcriptomic output, per gene and for all genes. As a summary readout of chromatin activation status, the authors first quantified chromatin accessibility across the genome, using the well-established technique ATAC-seq. Interestingly, the old liver displayed a global increase in accessibility at gene promoters and enhancers. Typically, more open chromatin is associated with increased potential for target gene expression. However, the gene expression differences in the liver of young vs old mice were moderate and did not correlate with the global increase in chromatin accessibility at promoters. This counterintuitive observation suggested that the function of the RNA Pol II transcriptional machinery may change during aging. Thus, Bozukova et al next performed a detailed analysis of RNA Pol II function. Compared with histones and most chromatin regulators, RNA Pol II is highly dynamic and moves along genes. Therefore, an experimental snapshot of the distribution of RNA Pol II along a gene is complex and reveals underlying checkpoints in the control of transcriptional elongation (Roeder, 2019; Schier & Taatjes, 2020). Notably, RNA Pol II assembles at the promoter start of transcription (around position +1 bp), and typically arrests transiently during early elongation at a "pause site" (around positions +20 to +60 bp) where control mechanisms can eject the complex or license it for full elongation (Chen et al, 2018).
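One common way to quantify this promoter-proximal arrest from base-pair-resolution Pol II profiles is a "pausing index": signal density over the pause region divided by density over the gene body. The minimal sketch below, with assumed window choices, illustrates the idea; it is a standard metric in the field but not the exact computation used by Bozukova et al.

import numpy as np

def pausing_index(coverage: np.ndarray, tss: int,
                  pause_win=(0, 60), body_win=(300, 2000)) -> float:
    """Mean pause-region signal divided by mean gene-body signal.

    coverage: per-base Pol II signal along one gene (sense strand);
    tss: index of the transcription start site within `coverage`.
    Window boundaries are assumptions chosen to bracket the +20 to +60 bp
    pause site described above.
    """
    pause = coverage[tss + pause_win[0]: tss + pause_win[1]].mean()
    body = coverage[tss + body_win[0]: tss + body_win[1]].mean()
    return pause / max(body, 1e-9)  # guard against division by zero

# Toy example: a gene with a strong peak at +20..+60 bp has a high index;
# loss of pausing (as reported in old livers) drives the index toward 1.
cov = np.ones(3000)
cov[1020:1060] += 9.0  # paused Pol II peak downstream of a TSS at index 1000
print(round(pausing_index(cov, tss=1000), 2))  # ~7.0 for this toy profile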
Bozukova et al used a combination of techniques to compare RNA Pol II function in old vs young mouse liver and to precisely map the abundance of RNA Pol II with single base-pair resolution along every gene. Crucially, they observed a striking and global reduction in paused RNA Pol II in the old liver samples. To explore this, the authors examined some known regulators of the pausing process. Of note, the accessory factor SPT4, which is important for licensing into elongation, was depleted from promoters across the genome in old livers. This provides a potential explanation for the apparent paradox whereby aging is associated with more accessible chromatin at promoters, without the expected increase in transcriptional output. The authors conclude that the increased chromatin accessibility to RNA Pol II is compensated by a reduced efficiency in elongation. Specifically, the potential increase in RNA Pol II recruitment and initiation is offset by a destabilized pausing complex and ejection of RNA Pol II from the gene. Ultimately, it is tempting to speculate that a compensation process is at work, in which increased chromatin accessibility is mechanistically linked to deficient promoter pausing of RNA Pol II and reduced engagement into full elongation, thereby preserving normal gene expression levels (Fig 1). This study raises fundamental questions that remain to be further explored: how does chromatin accessibility increase, while RNA Pol II pausing decreases, with age? Are these processes causally linked, or could they be independently generated phenomena? If they are linked, which process is the initiator? Future studies can continue to probe this mechanistically, since the pausing process has been studied in detail and many more regulators are known (Chen et al, 2018; Schier & Taatjes, 2020). For analyzing regulators and global trends in pausing and chromatin accessibility, future studies may routinely include spike-ins to monitor for global changes in transcription output. Notably, a recent analysis of transcription at the single-cell level during aging has revealed a global reduction in RNA levels, including in hepatocytes (Pálovics et al, 2022). Other levels of complexity can emerge upon aging, such as transcriptional dispersion or heterogeneity between cells of the same type (Nikopoulou et al, 2019). This could also include cell-to-cell heterogeneity in chromatin accessibility and RNA Pol II pausing. It remains to be determined whether the transcriptional compensation model, described by Bozukova et al in bulk tissue, comes at the price of increased cell-to-cell heterogeneity. In summary, the current work puts dysregulation of chromatin accessibility and transcriptional pausing in the focus of aging research.

MS is a shareholder of Senolytic Therapeutics, Life Biosciences, Rejuveron Senescence Therapeutics and Altos Labs, and is an advisor of Rejuveron Senescence Therapeutics and Altos Labs. The funders had no role in the study design, data collection and analysis, decision to publish, or manuscript preparation.

Figure 1. Age-related changes in transcription in the mouse liver. (A) Integrative genome-wide analyses of chromatin accessibility and transcription in young vs. old mouse liver samples indicated increased chromatin accessibility in old animals, which is, however, not accompanied by the expected increase in transcriptional output.
(B) The model proposed by Bozukova et al suggests that increased chromatin accessibility to RNA Pol II is compensated by a reduced efficiency in elongation due to deficient promoter pausing of RNA Pol II and ultimately premature termination of transcription.
2022-09-15T06:16:43.234Z
2022-09-01T00:00:00.000
{ "year": 2022, "sha1": "6c30e23e3ad0c73ff6251bfe14597cf272503af4", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "08090441f026bc5d3ab41b2a18309dc13640dc7b", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221760734
pes2o/s2orc
v3-fos-license
Pedological Characterisation of Soils of University Farm, Federal University of Kashere, Gombe State, Nigeria

Pedological characterization of soils is key for land resource planning and for developing soil management interventions to improve agricultural productivity. A study was conducted at the University Farm to examine soil morphological, physical and chemical attributes for land use planning and for determining area-specific soil management strategies. A detailed soil survey was conducted using a free survey method. Three profile pits were dug at the upper, middle and lower slope positions. A hoe and hand trowel were used to collect soil samples from the identified genetic horizons. The collected soil samples were then air-dried, crushed gently and stored in well-labelled polythene bags. The processed soil samples were then taken to the laboratory for analysis, following standard procedures, to determine the physical and chemical properties of each soil sample. The results indicated that the soils are deep to very deep and predominantly weak-red to pale-red in colour (7.5R 4/3 to 10R 7/3), while soil structure was observed to be dominantly sub-angular blocky in all the profiles. The dry, moist and wet consistencies across the slope were predominantly hard to soft, friable, and non-sticky/non-plastic to slightly sticky/slightly plastic, respectively. The soil particle size distribution indicated that the values of sand, silt and clay ranged from 17.6% to 69.6% (mean = 46.26%), 6.40% to 64.4% (mean = 39.3%) and 12% to 26% (mean = 16.43%), respectively. The soils were generally found to be sandy loam to silty loam in texture, while bulk density was found to be low, ranging from 1.19 to 1.66 g/cm3. The mean pH ranged from 5.52 to 5.73, which is termed moderately acidic in reaction. The mean organic carbon, total nitrogen and available phosphorus contents obtained in this study ranged from 0.27 to 0.33 mg/kg, 0.02 to 0.03 g/kg and 6.82 to 6.94 mg/kg, respectively. The exchangeable bases (Ca, Mg, K and Na) were generally found to be medium to high. Management practices such as mulching, cover cropping, alley cropping, addition of organic and green manures, and chemical fertilizers containing especially N, P and K should be adopted for optimal agricultural productivity.

INTRODUCTION

Agriculture plays a significant role in the economy and livelihoods of people in Nigeria. Improving the productivity of the agriculture sector of the country is greatly dependent on efficient utilization and management of soils [1]. Sustainable utilization of agricultural lands requires a thorough knowledge and inventory of soil resources, and hence there is a need to characterize soils in farming areas [2]. Soil characterization helps to generate the information required for land use planning and soil management purposes. Soil surveys are important for soil characterization and classification purposes and aid in the creation of databases on soil morphology and physical and chemical properties [3]. This information is important for determining agricultural potentials, limitations and possible management options for the soils of a particular area, thereby helping in the selection of the agricultural enterprises best suited to that area [4,5]. Irrigation projects can be planned and developed based on information obtained from soil characterization and classification.
Area-specific soil fertility management strategies aimed at increasing crop production can be developed for a particular area using soil survey data instead of general fertilizer recommendations. Information on soil characterization can be utilized widely by land use planners, agricultural researchers, extension staff, development agents and farmers in order to sustainably increase agricultural production. Agricultural research plays an integral part in the study area, and yet there have been no detailed soil survey studies conducted to characterize the soils in this area. There is limited information available for assessing the agricultural potential and limitations of the soils in the study area, and hence there is a need to conduct detailed soil surveys for soil characterization purposes. Therefore, the objectives of the study were to characterize the soils of the study area by determining their soil morphology and physical and chemical attributes, thereby generating the soils information required for land use planning and soil management strategies in the study area. The study area The field experiment was conducted at the University Farm, Federal University of Kashere, Gombe State. Its coordinates lie at latitude 10°30′N and longitude 10°52′E, on the northern fringes of the Sudan Savanna belt of Nigeria. It is located at an elevation of 523 m above sea level. The geology of the study area is developed on basement complex rocks with adjoining sedimentary rock formations [6]. The area has a tropical climate, with distinct wet and dry seasons [7]. The area records about three to four months of rainfall, concentrated in the months of July, August and September, with an average rainfall of 951 mm per annum [8]. The mean annual temperature ranges from 30 to 37°C, while March, April and May were observed to be the dry, hot months of the year. During the rainy season, the temperature drops considerably due to the cloud cover between July and August, as well as during the Harmattan period of November to February [8]. Soil sampling and handling Three profile pits, with dimensions 2 m long, 1.5 m wide and 1.5 m deep, were dug along a toposequence at the study site. Soil samples and soil clods were collected from each identified genetic horizon of the three profile pits using a hoe and hand trowel. The collected soil samples were then properly labeled in polythene bags and taken to the laboratory for analysis. In the laboratory, each sample was separately air-dried, ground and passed through a 2 mm sieve for laboratory analysis as described by [9]. Particle size analysis was determined using the Bouyoucos hydrometer method, after dispersing the soil samples with 5% sodium hexametaphosphate. The bulk density was determined by the clod method [10]. Soil pH was determined in a 1:1 soil:water ratio using a glass electrode pH meter [11]. Organic carbon and total nitrogen were determined by the wet oxidation method and the regular micro-Kjeldahl method, respectively. Available phosphorus was determined using the Bray 1 method. The exchangeable cations in the soil samples were determined in an extract of 1N neutral ammonium acetate (NH4OAc) [12]. DATA ANALYSIS The data generated from the laboratory analysis were subjected to simple descriptive statistics, which included range and mean, as described by [13], while variability was assessed using the coefficient of variation [14].
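As a minimal illustration of the descriptive statistics just described (range, mean and coefficient of variation), the following Python sketch is offered; the sample values and the CV rating cut-offs (<15% low, 15-35% moderate, >35% high, the convention the text appears to follow) are assumptions for illustration, not data from the study.

# Minimal sketch (hypothetical values): descriptive statistics as used in this study.
# CV (%) = (standard deviation / mean) * 100; rating cut-offs are assumed:
# low (<15%), moderate (15-35%), high (>35%).
import statistics

def describe(name, values):
    mean = statistics.mean(values)
    cv = statistics.stdev(values) / mean * 100.0
    rating = "low" if cv < 15 else ("moderate" if cv <= 35 else "high")
    print(f"{name}: range {min(values)}-{max(values)}, mean {mean:.2f}, CV {cv:.1f}% ({rating})")

# Hypothetical sand contents (%) from the three profile pits:
describe("Sand", [17.6, 45.3, 69.6])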
Morphological properties of soils of the study area Soil depths varied across the toposequence, with profile depths across the slope ranging from 148 to 200 cm. The profile at the lower slope recorded the shallowest depth, but generally the soils were found to be deep to very deep; [15][16][17] all reported deep to very deep soils in their various studies. The depth of all the soil profiles will permit crop root proliferation and elongation, since the water table is deep enough not to constitute an obstacle to root development. The colours of the surface horizons ranged from brown (10YR) to weak red (7.5R 4/3) and pale red (10R 7/3) across the toposequence, while the corresponding subsurface horizons were found to be predominantly brown (10YR) to pale red (10R 7/3) in colour. Hydromorphic mottling was observed only in the subsurface horizons and was mostly few and faint. The kind, amount and distribution of organic matter, various mineral constituents (mainly iron compounds) and/or a stagnant water table cause soils to appear in different colours [18,19]. The surface horizon was predominantly found to be sandy loam in texture, while the subsurface horizons were dominated by sandy loam and silt loam textures. The texture of these soils reflected the parent rocks from which they were formed [20]. Several authors have linked soil texture to the nature of the parent materials from which the soils were derived and also to the rate and nature of some weathering processes [21]. The soil structure is dominantly sub-angular blocky, ranging from weak to moderate in grade across the profiles. This confirms the earlier findings of [22], who reported weak to moderate sub-angular to angular soil structures in their various studies. The dry, moist and wet consistencies across the slope were predominantly hard (H) to soft (S), friable (F), and non-sticky non-plastic (nsnp) to slightly sticky and slightly plastic (sssp) across the profiles. [23] also reported similar findings in some pedons while characterizing and classifying the soils of Yikalo Subwatershed in Lay Gayint District, Northwestern Highlands of Ethiopia. Generally, the increased sticky consistence (wet) with increasing soil depth observed in some profiles is diagnostic of clay lessivation, as reported by [24] for soils developed in sedimentary basins; [22] also reported increases in stickiness and hardness down the profile. In all the profiles studied, few and fine roots were found to predominate in both surface and subsurface horizons. Generally, the content of roots decreased as depth increased. Many roots were found in the Ap horizons, since this is the zone of active root activity. Horizon boundaries were mostly found to be gradual and wavy (gw) in all the studied pedons. Horizonation is ascribed to additions, losses, translocations and transformations of organic matter and to colour development, which is very evident for soils under vegetation. Generally, horizonation is promoted in these soils by melanization from the humification of organic matter in the A horizon. Physical properties of soils of the study area Sand fractions dominated the particle size distributions in most profiles (upper and middle slopes), with sand content ranging from 17.6% to 69.6% (mean = 46.26%). The particle size distribution showed that the sand content was the highest and the clay content the lowest for most of the profiles, as shown in Table 2.
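For context, the texture classes named here (sandy loam, silt loam) come from the USDA texture triangle; the Python sketch below uses simplified boundary rules that only approximate the full triangle, and the input percentages are hypothetical, not values from Table 2.

# Simplified, illustrative texture check (NOT the full USDA triangle):
# only the two classes reported in this study are distinguished.
def texture_class(sand, silt, clay):
    assert abs(sand + silt + clay - 100.0) < 1.0, "fractions must sum to ~100%"
    if silt >= 50 and clay < 27:                  # rough silt-loam region
        return "silt loam"
    if sand >= 43 and clay < 20 and silt < 50:    # rough sandy-loam region
        return "sandy loam"
    return "other (consult the full USDA triangle)"

print(texture_class(69.6, 18.4, 12.0))  # hypothetical upper-slope sample -> sandy loam
print(texture_class(21.0, 64.4, 14.6))  # hypothetical lower-slope sample -> silt loam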
The predominance of sand particles in arid and semi-arid climates is not uncommon, because many such soils were formed from aeolian deposits blown across several thousands of kilometers [25]. The silt content ranged from 6.40% to 64.4% (mean = 39.3%); the highest value of 64.4% was recorded at the lower slope. A notable feature of all the soils studied is their high silt content (Table 2); [26,27] also reported high silt contents in their various studies. The high silt content obtained in this study could be attributed to the nature of the parent material and the stage of soil development [28]. The clay content ranged from 12% to 26% (mean = 16.43%) in all the pedons, with the highest value recorded at the lower slope. [29,30] also reported low clay contents in their various studies while working on similar types of soils. The low clay content obtained in this study is attributed to the fact that the parent material of the study area is rich in sand. Bulk density ranged from 1.19 to 1.66 g/cm3 (mean = 1.4 g/cm3) across the toposequence. The bulk density values obtained in this study are within the range reported in earlier findings by [31], who recorded values of 1.11 to 1.98 g/cm3 while working on floodplain soils in the Southern Guinea Savanna of North Central Nigeria. The bulk densities of the studied soils also showed an apparent increase with depth; this could be attributed to the organic carbon (OC) distribution down the profile. However, the values obtained in this study are generally considered safe for root penetration, because penetration might be hindered in soils with bulk density values >1.75 g/cm3 [32,33]. Donahue et al. [34] pointed out that plant growth is best at bulk densities below 1.40 g/cm3 for clayey soils and below 1.60 g/cm3 for sandy soils. The coefficients of variation for sand and silt indicated high variability (>35%), while those for clay and bulk density were moderate (Table 2) along the toposequence. These results indicate that the toposequence influences the content and distribution of soil physical properties such as sand and silt. Chemical properties of soils of the study area The mean pH of the studied soils (Table 3) ranged from 5.52 to 5.73 across the profiles, indicating that the soils were moderately acidic [15]. The low pH values recorded in this study are similar to those earlier reported by [35,36]. The acidic condition of the soils under study could be attributed to the greater oxidation of anions like sulphides and nitrites, leading to soil acidification [21]. The mean organic carbon content ranged from 0.27 to 0.33 g/kg (Table 3) across the profiles and was rated low [15]. The organic carbon was also found to decrease down the slope. This finding is in line with earlier findings by [35,37], who obtained low OC contents for soils in the savanna zones of Nigeria. The low level of organic carbon in these soils could be attributed to low organic matter returns and other human factors such as crop residue removal, burning and mineralization. The mean total nitrogen content across the profiles ranged from 0.02 to 0.03 g/kg (Table 3) and was rated low as per the [15] rating scale. Low total nitrogen in soils has also been reported by [38,39].
The low level of TN obtained in this study could be attributed to the mobility of N in soils: losses through various mechanisms such as ammonia volatilization (especially under the high temperatures that characterize the climate of the region), subsequent denitrification, chemical and microbial fixation, leaching and runoff all leave the residual/available N in soils poor [30]. The mean available phosphorus ranged from 6.82 to 6.94 mg/kg (Table 3) across the profiles and was rated medium according to [15]. Similarly low available P values were earlier reported by [25,40] in their various findings. [41] attributed the low available phosphorus recorded in this study to its low content in the parent materials and its propensity to sorption on mineral surfaces. It could also be due to fixation as a result of the acidic condition of the soils. The coefficients of variation of the soil chemical properties (Table 3) along the toposequence showed that variability in soil pH (2%) and AP (0.8%) was low, while OC (10%) and TN (20%) showed moderate variability. This finding indicates that the toposequence only influences the content and distribution of soil OC and TN. Exchangeable bases of soils of the study area The exchangeable bases (Ca, Mg, Na and K) contents in the soil profiles across the toposequence are presented in Table 4. The mean exchangeable calcium content ranged from 4.18 to 10.1 cmol(+)/kg across the profiles (Table 4) and was rated medium to high [15]. Exchangeable calcium was also the dominant cation on the exchange sites of the studied soils (Table 4). This is in line with earlier findings by several researchers [39,42,43], who reported the preponderance of Ca over other cations. The dominance of Ca over other cations may be due to the existence of calcium-bearing parent material [18]. The mean exchangeable magnesium content ranged from 2.84 to 3.30 cmol(+)/kg across the profiles, with higher values obtained at the upper slope (Table 4). As per the [15] rating scale, these values are rated high. Magnesium (Mg) is the second most dominant extractable cation on the exchange complex of the studied profiles. The exchangeable Mg contents in the soils across the various sampling units and depths ranged from 0.41 to 4.11 cmol(+)/kg soil (Table 4) and were rated medium to high [15]. [44] also encountered high soil Mg contents in his assessment of some soil fertility characteristics of Abakaliki urban floodplains of South-East Nigeria. This seemingly medium to high Mg content obtained in this study could be related to the calcareous nature of the parent material [45]. The sodium content of the studied soils ranged from 0.09 to 0.15 cmol/kg across the profiles (Table 4) and was found to be medium to high [15]. Similar values were earlier reported by [44,46]. [47] also reported sodium contents ranging from 0.14 to 2.34 cmol(+)/kg soil while working on Vertisols. [44] attributed such high Na values to the deposition of salts on the soil as flood water recedes, leaving salt crusts and crystals upon evaporation, while [46] attributed them to the nature of the parent material (colluvia and alluvia) and the use of low-quality water for irrigation. The mean potassium content of the studied soils ranged from 0.28 to 0.33 cmol/kg across the profiles (Table 4) and was found to be medium to high as per the [45] rating scale.
In this study, the exchangeable K values between sampling units and horizons of the soils (Table 4) ranged from 0.36 to 0.56 cmol(+)/kg and were rated high according to the [15] rating scale. [37] also reported high K values while assessing variation in soil exchangeable bases along toposequences in Gombe State, Nigeria. This medium to high available potassium content observed in this study may be attributed to more intense weathering, the release of labile K from organic residues and the application of chemical fertilizers containing K [48]. A value greater than 2 cmol(+)/kg of K in soil indicates a fairly good supply, and a response to K fertilizer is unlikely [49,50]. The coefficients of variation of the soil exchangeable bases (Table 4) along the toposequence showed that Mg (8%) and K (10%) recorded low variability, while Na (25%) was moderate. The variability for Ca (41%) was found to be high. This indicates that, of all the exchangeable bases, only Ca is highly influenced by the geomorphic nature of the study area. CONCLUSIONS The results of the study indicated that most of the soils are predominantly weak red to pale red in colour (7.5R 4/3 – 10R 7/3). Sand dominated the particle size distribution, and most of the soils are sandy loam to silt loam in texture. Among the soils considered, the structure is dominantly sub-angular blocky in all the profiles. Most of the profiles had a friable moist consistence at the top and a slightly hard dry consistence in the lower horizons. The soils were also observed to be moderately acidic and low in OC, TN and AP, while the exchangeable bases were found to be generally medium to high. The results further indicated that soil properties such as sand, clay, silt, TN, Ca and Na are variable and can easily be influenced by differences in physiographic position. RECOMMENDATIONS In line with the above findings, there is a need to adopt appropriate agronomic measures for sustainable agricultural production within the study area. Management practices such as mulching, cover cropping, alley cropping, the addition of organic and green manures, and chemical fertilizers containing especially N and P should be adopted. Finally, proper and periodic monitoring of the physical and chemical properties of such soils is necessary, so that appropriate preventive measures can be taken as and when due for optimum agricultural productivity.
2020-09-16T14:32:06.236Z
2020-07-11T00:00:00.000
{ "year": 2020, "sha1": "87e603b741003b845f488e09f7efec7b37c5beaf", "oa_license": null, "oa_url": "https://doi.org/10.36347/sjavs.2020.v07i07.002", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "87e603b741003b845f488e09f7efec7b37c5beaf", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Geography" ] }
237605101
pes2o/s2orc
v3-fos-license
Virial Expansion of the Electrical Conductivity of Hydrogen Plasmas The low-density limit of the electrical conductivity $\sigma(n,T)$ of hydrogen as the simplest ionic plasma is presented as a function of temperature T and mass density n in the form of a virial expansion of the resistivity. Quantum statistical methods yield exact values for the lowest virial coefficients which serve as benchmarks for analytical approaches to the electrical conductivity as well as for numerical results obtained from density functional theory based molecular dynamics simulations (DFT-MD) or path-integral Monte Carlo (PIMC) simulations. While these simulations are well suited to calculate $\sigma(n,T)$ in a wide range of density and temperature, in particular for the warm dense matter region, they become computationally expensive in the low-density limit, and virial expansions can be utilized to balance this drawback. We present new results of DFT-MD simulations in that regime and discuss the account of electron-electron collisions by comparing with the virial expansion. Besides the equation of state and the optical properties, the direct-current electrical conductivity σ is a fundamental characteristic of plasmas which is relevant in various fields. Examples of technical applications range from the quenching gas in high-power circuit breakers [1], which acts as an efficient dielectric medium, up to fusion plasmas produced via magnetic [2] or inertial confinement [3]. The electrical conductivity is indispensable for the verification of the insulator-to-metal transition in warm dense hydrogen [4]. In geophysics, the electrical conductivity determines the properties of the outer liquid core and of the ionosphere, i.e., the entire magnetic field of Earth from the dynamo region [5] up to the magnetosphere [6]. Similarly, the electrical conductivity in the convection zone of giant planets [7], brown dwarfs [8], and stars [9] determines the action of the dynamo that produces their magnetic field. The investigation of the electrical conductivity of charged particle systems is, therefore, an emerging field of quantum statistics. In this work we provide exact benchmarks for this fundamental transport property. Theoretical approaches to calculate the electrical conductivity of plasmas were first performed within kinetic theory [10]. In a seminal paper [11], Spitzer and Härm determined σ of the fully ionized plasma by solving a Fokker-Planck equation. However, to calculate σ(n, T) in a wide region of temperature T and mass density n, a quantum statistical many-particle theory is needed which describes screening, correlations, and degeneracy effects in a systematic approach. In a very general way, according to the fluctuation-dissipation theorem, the conductivity is expressed in terms of equilibrium correlation functions. Kubo's fundamental approach [12] relates the electrical conductivity to the current-current correlation function. For the relation between generalized linear response theory [13][14][15] and kinetic theory, see [16] and references therein. The evaluation of the corresponding equilibrium correlation functions can be performed by using different methods: (i) Analytical expressions are derived, e.g., by using thermodynamic Green's functions.
Perturbation theory allows partial summations using diagram techniques, which leads to sound results in a wide range of T and n. However, as is characteristic for perturbative approaches, exact results can be found only in some limiting cases. (ii) This drawback is removed by numerical ab initio simulations of the correlation functions, applicable for arbitrary interaction strength and degeneracy. Using density functional theory (DFT) for the electron system and molecular dynamics (MD) for the ion system, see [12,[17][18][19][20]], single-electron states are calculated by solving the Kohn-Sham equations for a given configuration of ions. The total energy is given by the kinetic energy of a non-interacting reference system, the classical electron-electron interaction, and an exchange-correlation energy which contains all unknown contributions in a certain approximation. One of the shortcomings of this approach is that the many-particle interaction is replaced by this mean-field potential. (iii) In principle, an exact evaluation of the equilibrium correlation functions is possible by using path-integral Monte Carlo (PIMC) simulations, see [21][22][23] and references therein. The shortcomings of this approach are the rather small number of particles (a few tens), the sign problem for fermions, and the computational challenges of calculating path integrals accurately. These approaches and other closely related methods have been used to calculate σ(n, T) in a wide parameter range, and numerous results have been published; for a recent review see Ref. [24]. Also recently, a comparative study [25] considering different approaches has been published which revealed large differences between calculated conductivities. In the present study, we demonstrate that the virial expansion of the inverse conductivity serves as an exact benchmark for theoretical approaches so that the accuracy and consistency of results for the conductivity [25] can be checked. In particular, we apply this framework to analytical approaches, DFT-MD results, and experimental data for hydrogen, which was chosen for simplicity. In the course of this discussion, we present new DFT-MD data to extend the previously available conductivity data [27,38] in the density-temperature region of interest. The virial expansion of ρ = 1/σ suggested in this work is a prerequisite for working out interpolation formulas for the conductivity. It can be used in a wide range of T and n, analogous to the Gell-Mann-Brueckner result for the virial expansion of the plasma equation of state, see [26]. Finally, the benchmark capability of the virial expansion as discussed in this work may serve as a criterion to check the accuracy of numerical approaches like DFT-MD simulations for evaluating the conductivity. II. VIRIAL EXPANSION OF THE INVERSE CONDUCTIVITY. Charge-neutral hydrogen plasma (ion charge Z = 1) in thermodynamic equilibrium is characterized by the temperature T and the mass density n, or the total particle number density of electrons $\bar n_e$, which equals that of the ions $\bar n_{\rm ion}$. Instead, dimensionless parameters can be introduced: the plasma coupling parameter $\Gamma=\frac{e^{2}}{4\pi\varepsilon_{0}k_{B}T}\big(\frac{4\pi\bar n_{e}}{3}\big)^{1/3}$ (1), which characterizes the ratio of potential to kinetic energy in the non-degenerate case, and the electron degeneracy parameter $\Theta=\frac{2m_{e}k_{B}T}{\hbar^{2}(3\pi^{2}\bar n_{e})^{2/3}}$ (2), the ratio of the thermal energy to the Fermi energy. The dc conductivity σ(n, T), measured in (Ω m)⁻¹, is usually related to a dimensionless function σ*(n, T) via a prefactor proportional to $(4\pi\varepsilon_{0})^{2}(k_{B}T)^{3/2}/(m_{e}^{1/2}e^{2})$, Eq. (3). In this work, we consider both σ and σ* as functions of the density n at fixed temperature T.
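As a rough numerical companion, the following Python sketch evaluates the dimensionless parameters Γ and Θ and the virial-plot abscissa x = 1/ln(Θ/Γ) used below; the prefactor conventions follow the standard definitions reconstructed above and should be checked against the paper's Eqs. (1)-(2) before any quantitative use.

# Hedged sketch: plasma coupling Gamma, degeneracy Theta and x = 1/ln(Theta/Gamma).
# Conventions assumed: Gamma = e^2/(4 pi eps0 d kB T) with d = (3/(4 pi n_e))^(1/3),
# Theta = kB T / E_F. Check against the paper before quantitative use.
import math

E = 1.602176634e-19      # elementary charge (C)
EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)
KB = 1.380649e-23        # Boltzmann constant (J/K)
HBAR = 1.054571817e-34   # reduced Planck constant (J s)
ME = 9.1093837015e-31    # electron mass (kg)

def gamma_theta_x(n_e, T):
    """n_e in 1/m^3, T in K; returns (Gamma, Theta, x)."""
    gamma = E**2 / (4 * math.pi * EPS0 * KB * T) * (4 * math.pi * n_e / 3) ** (1 / 3)
    e_fermi = HBAR**2 * (3 * math.pi**2 * n_e) ** (2 / 3) / (2 * ME)
    theta = KB * T / e_fermi
    x = 1.0 / math.log(theta / gamma)
    return gamma, theta, x

# Example: fully ionized hydrogen at 2 g/cm^3 and T = 10^6 K (n_e = rho/m_p).
print(gamma_theta_x(2e3 / 1.6726e-27, 1e6))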
In the low-density limit, the following virial expansion for the inverse conductivity ρ*(n, T) = 1/σ*(n, T) was obtained from kinetic theory and generalized linear response theory [13][14][15]: $\rho^{*}(n,T)=\rho_{1}(T)\ln\frac{1}{n}+\rho_{2}(T)+\rho_{3}(T)\,n^{1/2}\ln\frac{1}{n}+\ldots$ (4). In contrast to a simple expansion in powers of n, the occurrence of terms with ln n and n^{1/2} ln n is due to the long-range character of the Coulomb interaction. To describe the collisions between the charged particles, an integral over the Coulomb interaction occurs which gives the so-called Coulomb logarithm, in which screening of the Coulomb interaction is taken into account. Typically, such a Coulomb logarithm arises in the correlation functions within the generalized linear response theory [13][14][15]. By convention, virial expansions consider the dependence of physical quantities on the density n, for instance as a power series expansion. However, the density n has a dimension, and for ρ* not to depend on units, the virial coefficients ρ_i in general also carry a dimension. In particular, the term ρ₁ ln(1/n) needs a compensating term ρ₁ ln(A), where A has the dimension of a density, as a contribution to ρ₂, so that ρ* remains dimensionless. Usually, relations like (4) are given after fixing the units in which the physical quantities are measured, but it is also convenient to introduce dimensionless variables. For motivation, we consider the Born approximation for the Coulomb logarithm. Within static (Debye) screening of the Coulomb interaction, introduced to avoid the divergence owing to distant collisions, the Born approximation leads to an integral expression for the Coulomb logarithm, Eq. (5), see [13][14][15]. The Debye screening parameter in the low-density (non-degenerate) limit reads $\kappa^{2}=\frac{(\bar n_{e}+\bar n_{\rm ion})e^{2}}{\varepsilon_{0}k_{B}T}$ (6), so that the integral depends only on a single dimensionless parameter, Eq. (7), which can be expressed through Γ and Θ. We focus on the first and second terms on the right-hand side of Eq. (5), which suffices to derive the first [ρ₁(T)] and second [ρ₂(T)] virial coefficients of the virial expansion (4). Further contributions are of higher order in density; for Γ/Θ ≤ 0.01 they contribute to the integral in Eq. (5) by less than 1%. In the virial expansion (4), the logarithm can be transformed by introducing the dimensionless parameter Θ/Γ, see Eq. (5), and we find the modified expression $\tilde\rho^{*}(\Theta,\Gamma)=\tilde\rho_{1}(T)\ln\frac{\Theta}{\Gamma}+\tilde\rho_{2}(T)+\ldots$ (9). To find the relation between the $\tilde\rho_{i}$ and the ρ_i, we replace in Eq. (9) the variables Θ, Γ by n, T according to Eq. (8); comparing with Eq. (4), we find $\tilde\rho_{1}=\rho_{1}$ and a corresponding relation for $\tilde\rho_{2}$, Eq. (11), in which a_B is the Bohr radius and T_Ryd = k_B T/13.6 eV is the temperature measured in Rydberg units. A highlight of plasma transport theory is that the exact value of the first virial coefficient is known for Coulomb systems from the seminal paper of Spitzer and Härm [11]: $\tilde\rho_{1}=0.846$, independent of T (12). Note that Eq. (12) takes into account the contribution of the electron-electron (e−e) interaction. In contrast, for the Lorentz plasma model, where the e−e collisions are neglected so that only the electron-ion interaction is considered, the first virial coefficient amounts to the smaller value $\rho_{1}^{\rm Lorentz}$, Eq. (13), reduced relative to Eq. (12) by the Spitzer factor 0.582 for Z = 1. Although e−e collisions do not contribute to a change of the total momentum of the electrons, because of momentum conservation, the distribution in momentum space is changed ("reshaping"), so that higher moments of the electron momentum distribution are not conserved. The indirect influence of e−e collisions on the dc conductivity is clearly seen in generalized linear response theory, where these higher moments are considered; see [15].
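To make the expansion concrete, here is a hedged Python sketch evaluating the two-term virial expansion of the reduced resistivity using the two coefficients quoted in this paper (ρ̃₁ = 0.846 from Spitzer-Härm, Eq. (12), and the high-temperature limit ρ̃₂ = 0.4917 given in the next paragraph); treating ρ̃₂ as a constant is an assumption valid only at high T, and the example values reuse the gamma_theta_x sketch above.

# Two-term virial expansion of the reduced resistivity, Eq. (9):
#   rho* = rho1 * ln(Theta/Gamma) + rho2 + ...
# rho1 = 0.846 (Spitzer-Haerm); rho2 = 0.4917 is only the T -> infinity (QLB) limit,
# so using it at finite T is an illustrative assumption. Note: for
# x = 1/ln(Theta/Gamma) > ~0.2 higher virial coefficients matter (see text).
import math

RHO1_SPITZER = 0.846
RHO2_QLB = 0.4917

def rho_star_virial(theta, gamma):
    """Two-term virial expansion of rho* = 1/sigma*; low-density limit only."""
    if theta <= gamma:
        raise ValueError("expansion requires the low-density limit Theta/Gamma > 1")
    return RHO1_SPITZER * math.log(theta / gamma) + RHO2_QLB

# Example with the values from the previous sketch (Theta ~ 2.1, Gamma ~ 0.29):
print(1.0 / rho_star_virial(2.1, 0.29))   # dimensionless conductivity sigma*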
For the second virial coefficient ρ₂(T) or $\tilde\rho_{2}(T)$, no exact value is known. It depends on the treatment of many-particle effects, in particular the screening of the Coulomb potential. Within a quantum statistical approach, the static (Debye) screening by electrons and ions, see Eq. (5), should be replaced by a dynamical one. For the hydrogen plasma considered here, the Born approximation for the collision integral is justified at high temperatures T_Ryd ≫ 1. Considering screening in the random-phase approximation leads to the quantum Lenard-Balescu (QLB) expression. Thus, at very high temperatures, where the dynamically screened Born approximation becomes valid, we obtain the QLB result, see [27][28][29]: $\lim_{T\to\infty}\tilde\rho_{2}(T)=\tilde\rho_{2}^{\rm QLB}=0.4917$ (14). With decreasing T, strong binary collisions (represented by ladder diagrams) become important; they have to be treated beyond the Born approximation when calculating the second virial coefficient $\tilde\rho_{2}(T)$. According to Spitzer and Härm [11], the classical treatment of strong collisions with a statically screened potential gives the corresponding low-temperature expression, Eq. (15). Interpolation formulas have been proposed connecting the high-temperature limit $\tilde\rho_{2}^{\rm QLB}$ with the low-temperature Spitzer limit. Instead, performing the sum of ladder diagrams with the dynamically screened Coulomb potential, Gould and DeWitt [30] and Williams and DeWitt [31] proposed approximations in which the lowest order of a ladder sum with respect to a statically screened potential, the Born approximation, is replaced by the Lenard-Balescu result, which accounts for dynamic screening. An improved version was proposed in Refs. [14,32] by introducing an effective screening parameter κ_eff such that the Born approximation coincides with the Lenard-Balescu result; see also [13][14][15],[34]. Based on a T-matrix calculation in the quasiclassical (Wentzel-Kramers-Brillouin, WKB) approximation [35,36], an expression for $\tilde\rho_{2}(T)$ with the temperature given in eV, Eq. (16), can be considered as a simple interpolation which connects the QLB result with the Spitzer limit in WKB approximation. However, the exact analytical form of the temperature dependence of the second virial coefficient $\tilde\rho_{2}(T)$ remains an open problem. Thus, the available exact results for the virial expansion (9) of the resistivity of hydrogen plasma are: (i) the value of the first virial coefficient is $\tilde\rho_{1}=0.846$; (ii) the second virial coefficient has the high-temperature limit $\lim_{T\to\infty}\tilde\rho_{2}(T)=0.4917$; (iii) the second virial coefficient is temperature dependent, and a promising functional form is given by Eq. (16). III. VIRIAL COEFFICIENTS FROM ANALYTICAL APPROACHES. To extract the first and second virial coefficients from calculated or measured dc conductivities, we plot the expression $\tilde\rho(x,T)=x\,\rho^{*}(n,T)$ (17) as a function of x = 1/ln(Θ/Γ) and T in Fig. 1, which is denoted as the virial plot. According to Eq. (4), the behavior of any isotherm (fixed T) near n → 0 is linear, with $\tilde\rho_{1}(T)$ as the value at x = 0 and $\tilde\rho_{2}(T)$ as the slope of the isotherm. As discussed above in the context of the Born approximation (5), for x > 1/ln(100) = 0.217 the contributions of higher-order virial coefficients have to be taken into account. In addition, at fixed T, in the low-density region where Θ ≫ 1, the plasma is in the classical limit, and effects of degeneracy enter through higher-order virial coefficients. In Fig. 1, three cases for the first virial coefficient ρ₁ are shown on the ordinate axis: the Spitzer value, the Lorentz value, and the value obtained from the force-force correlation function as known from the Ziman theory (FF, Ziman); see also [13][14][15].
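The extraction just described amounts to a linear fit of x·ρ* against x for each isotherm; the following hedged Python sketch illustrates it with numpy.polyfit on synthetic data (the data points are invented for illustration, not taken from Fig. 1).

# Hedged illustration: recover rho1 (intercept) and rho2 (slope) from a virial plot.
# The (x, rho*) pairs below are synthetic, generated from the two-term expansion
# with a little noise; real input would be measured or simulated conductivities.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.02, 0.2, 10)                  # x = 1/ln(Theta/Gamma), low-density side
rho_star = 0.846 / x + 0.4917                   # synthetic rho*(x) on one isotherm
rho_tilde = x * rho_star + rng.normal(0, 0.005, x.size)  # plotted quantity + noise

slope, intercept = np.polyfit(x, rho_tilde, 1)  # linear fit of the isotherm
print(f"rho1 ~ {intercept:.3f} (exact 0.846), rho2 ~ {slope:.3f} (exact 0.4917)")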
In addition, the second virial coefficient $\tilde\rho_{2}^{\rm LB}$ of the Lenard-Balescu approximation (14) is shown as a black broken straight line, which is expected to be correct in the high-temperature limit. Two QLB calculations of Desjarlais et al. [27] are shown in Fig. 1; see also [39]. The line including e−e collisions obeys the same asymptote (x → 0) as that of Karakhtanov [28]. With increasing x = 1/ln(Θ/Γ), small deviations from the linear behavior are seen. The line for calculations without e−e collisions (Lorentz plasma) points to the corresponding asymptote given by $\rho_{1}^{\rm Lorentz}$. Recently, the transport properties of hydrogen plasma were compiled in Ref. [25]. For a grid of lattice points in the n-T plane (considering n = 0.1, 1, 10, 100 g/cm³ and T_eV = 0.2, 2, 20, 200, 2000), the results of different approaches were given. Large deviations were obtained, which indicate not only unavoidable numerical uncertainties but also deficits in some of the theoretical approaches. Their consistency can be checked via the virial expansion as a benchmark. As an example, we show data of Clérouin et al. and of Copeland for the isotherm T_eV = 2000 taken from Ref. [25] in Fig. 1. Extrapolating to x = 1/ln(Θ/Γ) → 0, these high-temperature isotherms already show significant differences. The data of Clérouin et al. point to the correct Spitzer limit $\rho_{1}^{\rm Spitzer}$, including e−e collisions, but have a rather steep slope. This may be caused by the approximations in treating dynamical screening and the ionic structure factor, in contrast to a strict QLB calculation. The data of Copeland clearly point to the limit $\rho_{1}^{\rm Lorentz}$ of the Lorentz model, i.e., this approach does not include e−e collisions and fails to describe the conductivity of hydrogen plasma correctly. Also shown in Fig. 1 are analytical results for the dc conductivity of hydrogen plasma presented in Lambert et al. [38] at the lowest density, n = 10 g/cm³. The data denoted by Hubbard [40] are close to the data of Clérouin et al. discussed above. The asymptote is the correct benchmark $\rho_{1}^{\rm Spitzer}$, but the slope is rather large. The data of Lee and More [41] are closer to the QLB calculations. In contrast to Copeland, who also claims to use the Lee-More approach, the e−e collisions are possibly added here, so that the extrapolation to x → 0 is near the correct benchmark $\rho_{1}^{\rm Spitzer}$. Because of the approximations in evaluating the Coulomb logarithm, deviations from the QLB result are seen. The kink in the Lee-More and Hubbard data seen in Fig. 1 is due to switching the minimum impact parameter in the Coulomb logarithm from the classical distance of closest approach to the quantum thermal wavelength, cf. Ref. [24]. Ichimaru and Tanaka [42] derived an analytical expression for the conductivity which was improved in [43] by adding a tanh-term to the Coulomb logarithm. The latter expression was also used in Ref. [38]; the isochore n = 10 g/cm³ is shown in Fig. 1. The approach is based on a single Sonine polynomial approximation in which the effect of e−e collisions is not taken into account. The empirical fit of Kitamura and Ichimaru [43] approximates the conductivity for degenerate plasmas, see also Fig. 9 of Ref. [38]. However, in the low-density limit this approach fails to describe the conductivity, approaching $\rho_{1}^{\rm Ziman}$ at x = 0. IV. VIRIAL REPRESENTATION OF DFT-MD SIMULATIONS.
DFT-MD simulations are of great interest, since they do not suffer from the restrictions of perturbation theory typical for analytical results and can be confronted directly with the virial expansion. In addition, with the virial expansion the results can be extrapolated into the low-density region where DFT-MD simulations become infeasible. In this work, we present new DFT-MD results for the electrical conductivity of hydrogen obtained from an evaluation of the Kubo-Greenwood formula [12,17,45,46]. The 125-atom simulations are performed with the Vienna ab initio simulation package (VASP) [49][50][51] using the exchange-correlation functional of Perdew, Burke, and Ernzerhof (PBE) [52] and the provided Coulomb potential for hydrogen. The time steps were chosen between 0.1 and 0.2 fs, and the simulations ran for at least 4000 time steps. The ion temperature is controlled with a Nosé-Hoover thermostat [53]. For all simulations, the reciprocal space was sampled at the Baldereschi mean value point [54]. Special attention has been paid to convergence with respect to the particle number. Additional details of the simulations are given in the supplemental material, and the results are given in Tab. I. Our DFT-MD results are plotted in Fig. 2 and show a general increase with increasing x = 1/ln(Θ/Γ). For comparison, the virial plot contains previous DFT-MD conductivity data [27,38], which were translated into our $\tilde\rho$ representation. The first set of previous DFT-MD calculations was published by Lambert et al. [38] and was also used by Starrett [47]. Results for $\tilde\rho$ for the lowest values of x > 0 at three different densities are given in Fig. 2. Inspecting Fig. 2, the data at 200 eV and those for 160 g/cm³ at 800 eV lie close together, i.e., we see a dominant dependence on x; no additional density or temperature effect is seen. They are also close to the Lee-More approach including e−e collisions, so that they are not in conflict with the correct benchmark (KT, Spitzer). The calculations are based on a formulation of the Kubo-Greenwood method for average-atom models neglecting the ion structure factor [48], so that these QMD values are possibly also influenced by approximations and, therefore, deviate slightly from other calculations. However, the parameter values x are too large to extract the virial coefficients. The second set of previous DFT-MD simulations for hydrogen plasma in the low-x region was given by Desjarlais et al. [27], see Fig. 2. For a density of 40 g/cm³, three temperatures, T_eV = 500, 700, and 900, were considered. The reduced resistivity $\tilde\rho(x,T)$ approaches the benchmark obtained from the QLB calculations. However, the linear extrapolation to $\rho_{1}^{\rm Spitzer}$ at x = 0 is not exhibited by these data. Interestingly, the results for $\tilde\rho$ from the different DFT-MD simulations do not approximately follow a single curve, as would be expected from the high-temperature limit of the virial expansion. The values of Lambert et al. are significantly higher than ours, but the slope is almost the same. While we employ the generalized gradient approximation exchange-correlation energy of PBE [52], Lambert et al. use the local density approximation. They used orbital-free molecular dynamics in order to simulate the system and obtain various snapshots for each density-pressure point. Subsequently, these snapshots were evaluated via the Kubo-Greenwood formula using the Kohn-Sham code Abinit, which is equivalent to the VASP implementation we used. The DFT-MD simulations of Desjarlais et al.
[27] are close to our results, but the slope of the virial plot is quite different. DFT-MD simulations are usually performed at high densities where the electrons are degenerate, so that e−e collisions can be neglected. In the low-density region (x < 1) considered here, we could improve the accuracy by studying the convergence of the DFT-MD results, in particular with respect to the particle number and the cut-off energy, using high-performance computing facilities. A long-discussed problem in this context is the question whether or not e−e collisions are taken into account within the DFT-MD formalism. For example, in Ref. [37] it was pointed out that a mean-field approach is not able to describe two-particle correlations, in particular e−e collisions. However, the e−e interaction is taken into account by the exchange-correlation energy, as shown in Ref. [27] by comparing DFT-MD data for the electrical conductivity to QLB results. The calculations of Desjarlais et al. [27] for n = 40 g/cm³ and our present ones for n = 2 g/cm³ were computationally demanding but are still not very close to x = 0, so that the extrapolation to the limit x = 0 is not very precise. However, the corresponding slopes are quite different: while the present DFT-MD data favor $\rho_{1}^{\rm Lorentz}$ as the asymptote at x = 0, those of Ref. [27] seem to point to the Spitzer value, Eq. (12). Thus, our results do not settle the lively debate on whether or not DFT-MD simulations include the effect of e−e collisions on the conductivity. We conclude that further DFT-MD simulations have to be performed at still higher temperatures and/or lower densities in order to approach the limit x → 0, so that the value of ρ₁ can be derived more accurately. Such simulations, e.g. for densities below 1 g/cm³, are computationally very challenging using the Kohn-Sham DFT-MD method, so that alternative schemes like stochastic DFT [57] or the spectral quadrature method [58] have to be applied for this purpose. We would like to mention that, in the case of the thermal conductivity, it has been shown that the contribution of e−e collisions is not taken into account in DFT-MD simulations [27] and gives an additional term. A profound discussion of the mechanism of e−e collisions has been given recently by Shaffer and Starrett [24]. They argued that the precise nature of the incomplete account of e−e scattering may be resolved by methods going beyond the Kubo-Greenwood approximation, such as time-dependent DFT or GW corrections. Considering a quantum Landau-Fokker-Planck kinetic theory, their main point is that scattering between particles in a plasma should be described not by the Coulomb interaction but by the potential of mean force. Obviously, if part of the interaction is already taken into account by introducing quasiparticles and mean-field effects, the corresponding contributions must be removed from the Coulomb interaction for e−e scattering to avoid double counting. Comparing with QMD results, Shaffer and Starrett [24] point out that their findings support the conclusions of Ref. [27] that the Kubo-Greenwood QMD calculations contain the indirect electron-electron reshaping effect relevant to both the electrical and thermal conductivity, but they do not contain the direct scattering effect which further reduces the thermal conductivity. V. EXPERIMENTS. Ultimately, the virial expansion (9) has to be checked experimentally, but accurate data for the conductivity of hydrogen plasma in the low-density limit and/or at high temperatures are scarce.
Accurate conductivity data for dense hydrogen plasma were derived by Günther and Radtke [44]; they are shown in the virial plot, Fig. 2, and are close to the benchmark of the virial expansion. Note that systematic errors are connected with the analysis of such experiments. For instance, the occurrence of bound states requires a realistic treatment of the plasma composition and of the influence of neutrals on the mobility of electrons. Alternatively, conductivity measurements in highly compressed rare gas plasmas have been performed by Ivanov et al. [55] and Popovic et al. [36,56], but the interaction of the electrons with the ions deviates from the pure Coulomb potential owing to the cloud of bound electrons. The corresponding virial plot is close to the data of hydrogen plasma, see [39], but requires a more detailed discussion with respect to the role of bound electrons. VI. CONCLUSIONS. We propose an exact virial expansion (9) for the plasma conductivity to analyze the consistency of theoretical approaches. For instance, several analytical calculations of the dc conductivity σ(T, n) presented in Ref. [25] fail to meet this strict requirement and do not give accurate results. Results of DFT-MD simulations are presently considered to be the most reliable, and future PIMC simulations can be tested by benchmarking against the virial expansion (9) for x → 0. Note that these ab initio simulations become computationally challenging in the low-density region, but the virial expansion allows extrapolation into this region. The construction of interpolation formulas is possible, see [36], if the limiting behavior for n → 0 is known together with further data in the region of larger densities not accessible to analytical calculations. An outstanding problem that could potentially be addressed by applying the virial expansion of the conductivity is the question whether or not the e−e collisions are rigorously taken into account. Despite the work presented in [24,27], there is no final proof of whether the Kubo-Greenwood QMD calculations with the standard expressions for the exchange-correlation energy functional give the exact value of the plasma conductivity in the low-density limit. A Green's function approach may solve this problem, but this has not been performed yet. Therefore, we suggest applying our benchmark criterion to future large data sets of Kubo-Greenwood QMD calculations to investigate the contribution of e−e collisions in the low-density limit. The approach described here is also applicable to other transport properties such as the thermal conductivity, thermopower, viscosity, and diffusion coefficients. Of interest is also the extension of the virial expansion to elements other than hydrogen, where different ions may be formed and the electron-ion interaction is no longer purely Coulombic.
2021-09-24T01:15:31.261Z
2021-09-23T00:00:00.000
{ "year": 2021, "sha1": "c4b68a725eca5abf81d5f4b67fae68ca0f4bba2e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2109.11293", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c4b68a725eca5abf81d5f4b67fae68ca0f4bba2e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
16070620
pes2o/s2orc
v3-fos-license
Emergence of Carbapenemase-producing Klebsiella pneumoniae of Sequence Type 258 in Michigan, USA The prevalence of carbapenemase-producing Enterobacteriaceae (CPE) in our hospital increased beginning in 2009. We aimed to study the clinical and molecular epidemiology of these emerging isolates. We performed a retrospective review of all adult patients with clinical cultures confirmed as CPE by a positive modified Hodge test from 5/2009 to 5/2010 at the University of Michigan Health System (UMHS). Clinical information was obtained from electronic medical records. Available CPE isolates were analyzed by polymerase chain reaction (PCR) and sequencing of the 16S rRNA encoding gene and the blaKPC locus. Multilocus sequence typing (MLST) was used to characterize Klebsiella pneumoniae isolates. Twenty-six unique CPE isolates were obtained from 25 adult patients. The majority were Klebsiella pneumoniae (n=17). Other isolates included K. oxytoca (n=3), Citrobacter freundii (n=2), Enterobacter cloacae (n=2), Enterobacter aerogenes (n=1) and Escherichia coli (n=1). Molecular characterization of the 19 available CPE isolates showed that 13 (68%) carried the KPC-3 allele and 6 (32%) carried the KPC-2 allele. Among the 14 available K. pneumoniae strains, 12 (86%) carried the KPC-3 allele and belonged to a common lineage, sequence type (ST) 258. The other 2 (14%) K. pneumoniae isolates carried the KPC-2 allele and belonged to two unique STs. Among these ST 258 strains, 67% were isolated from patients with prior exposures to health care settings outside of our institution. In contrast, all CPE isolates carrying the KPC-2 allele and all non-ST 258 CPE isolates had acquisition attributable to our hospital. The molecular epidemiology of carbapenemase-producing K. pneumoniae suggests that KPC-3-producing K. pneumoniae isolates of a common lineage, sequence type (ST) 258, are emerging in our hospital. While ST 258 is a dominant sequence type throughout the United States, this study is the first to report its presence in Michigan. Introduction Antibiotic resistance presents a significant risk for patients and healthcare providers. The global dissemination of multidrug-resistant bacteria limits the utility of commonly used antimicrobials. Infections caused by these organisms are increasing in health care settings, posing challenges to clinical microbiology laboratories in rapidly identifying resistant organisms, to clinicians in choosing appropriate antimicrobial treatments, and to infection preventionists in implementing effective infection control interventions. Carbapenems have become the treatment of choice for serious infections by gram-negative hospital-associated pathogens, including extended-spectrum β-lactamase (ESBL) producing Enterobacteriaceae. 1 The increased use of carbapenems has been predictably followed by the arrival of carbapenem resistance. Carbapenem resistance in Enterobacteriaceae has been largely due to the acquisition of carbapenemase genes belonging to Ambler class A, B and D β-lactamases. 2 Klebsiella pneumoniae carbapenemases (KPCs) are the most common plasmid-encoded class A carbapenemases. In recent years, many variants of KPC genes (blaKPC) have been reported. 3 Carbapenemase-producing Enterobacteriaceae (CPE) have emerged as important nosocomial pathogens and spread globally, with a marked endemicity in the eastern United States, Israel, and Greece. 4,5
Among healthcare-associated infections reported to the Centers for Disease Control and Prevention, 8% of Klebsiella isolates were carbapenem resistant in 2007, compared to fewer than 1% in 2000. 6 A particular clonal lineage of carbapenemase-producing Klebsiella pneumoniae, sequence type (ST) 258, has been commonly associated with outbreaks in many countries, 7,8 suggesting that this epidemic clone may have contributed to the spread of the blaKPC genes. 9,10 Although CPE isolates are endemic to southeastern Michigan, 11,12 the purpose of this study was to investigate the clinical and molecular epidemiology of CPE isolates and to identify possible clonal spread of CPE in our hospital. Clinical information, including patient demographics, co-morbidities, health care exposures (invasive devices, antibiotics, prior hospitalization or stay in long-term care facilities) in the 90 days prior to culture, current hospitalization length of stay, antimicrobial treatment, and 90-day mortality outcomes, was abstracted from electronic medical records. CPE isolates were considered hospital acquired (nosocomial to UMHS) if the culture was obtained 48 hours or more after admission to UMHS, unless there was evidence of prior CPE isolation from the same patient at another hospital. Appropriate treatment was defined as the administration of at least one antibiotic that demonstrated in vitro activity against the infecting CPE isolate. When carbapenem resistance was identified by the clinical microbiology laboratory, patients were placed in contact precautions and charts were electronically flagged in case of readmission. If a patient was transferred to another health care facility, this status was conveyed to the accepting facility. However, due to the retrospective nature of the study, molecular testing results were not available and were not reported to other institutions. CPE isolates were identified by the UMHS clinical microbiology laboratory based upon phenotypic and antimicrobial susceptibility testing of Enterobacteriaceae using the Vitek-2 system (bioMérieux, Durham, NC). Isolates with an ertapenem MIC ≥2 mg/L were subjected to ertapenem disc diffusion testing. If the zone of inhibition was ≤22 mm, then the modified Hodge test was performed to phenotypically confirm carbapenemase production in accordance with Clinical and Laboratory Standards Institute (CLSI) criteria. 13 Only CPE isolates confirmed by a positive modified Hodge test were included in the study. Materials and Methods Bacterial DNA was extracted from overnight Mueller-Hinton broth cultures of CPE isolates using the Easy-DNA™ Kit (Invitrogen, Carlsbad, CA). The taxonomy of all available isolates was determined by amplification and sequencing of the 16S rRNA encoding gene, followed by BLAST (Basic Local Alignment Search Tool) querying of the National Center for Biotechnology Information (NCBI) nucleotide database (http://blast.ncbi.nlm.nih.gov/Blast.cgi). Briefly, broad-range primers (8F and 1492R) were used to amplify a ~1493 base pair region of the 16S rRNA encoding gene with high-fidelity Taq polymerase (AmpliTaq Gold Master Mix, Applied Biosystems, Inc). 14 Reaction mixtures were set up with 1 µL of template DNA (approximately 100 ng), 10 pmol of each primer, master mix, and water to a total volume of 25 µL.
PCR was performed in an Eppendorf Mastercycler thermocycler with the following cycling conditions: initial denaturation at 95°C for 5 min, followed by 35 cycles of denaturation at 95°C for 30 seconds, annealing at 58°C for 30 seconds, and extension at 72°C for 1.5 min. A final extension at 72°C for 10 min was performed. Amplicons were purified (QIAquick PCR Purification Kit, Qiagen, Inc) and sequenced bi-directionally (forward and reverse directions) using standard Sanger-style sequencing on an ABI 3730XL capillary sequencer. Raw sequences were trimmed, aligned, and edited using the SeqMan II program of the DNASTAR Lasergene 7 package (DNASTAR, Inc., Madison, WI), and consensus sequences were submitted to BLAST. All taxonomic identifications were at least 99% identical to representative sequences in the NCBI database. Allelic variants of the blaKPC gene were identified using methods published by Bradford et al. 15 Multilocus sequence typing (MLST) was performed to characterize K. pneumoniae genotypes as described on the K. pneumoniae MLST website (http://www.pasteur.fr/recherche/genopole/PF8/mlst/Kpneumoniae.html). Briefly, internal fragments of seven housekeeping loci were amplified and sequenced at 2x coverage (i.e., each nucleotide site was covered twice). Raw sequences were trimmed, aligned, and edited using SeqMan II, and consensus reads were compared using MEGA5. 16 K. pneumoniae sequence types were identified by searching the allele profiles on the K. pneumoniae MLST website. Discussion The CPE population at our institution has a unique molecular epidemiology. Notably, all non-ST 258 CPE isolates were considered nosocomial to our institution. These isolates were taxonomically diverse (Figure 1), and with a single exception (E. hormaechei) all carried the KPC-2 allele. A previous study from our hospital showed evidence of a common KPC-2-carrying plasmid among two genera of Enterobacteriaceae. 12 Recently, other studies have shown evidence of horizontal transfer of blaKPC-carrying plasmids across different strains, species and genera of the family Enterobacteriaceae. [17][18][19] Similarly, the carriage of the KPC-2 gene by two unique isolates belonging to different genera from the same patient in our study could also be due to horizontal inter-genus plasmid transfer, which has been described previously. [17][18][19] Further studies, including plasmid and transposon characterization, are required on our isolates to support this hypothesis. While 60% of CPE patients had an ICU stay and 92% had a hospitalization within the prior 90 days, we were unable to identify any epidemiological link among patients with nosocomial acquisition of CPE in our hospital. All 12 ST 258 K. pneumoniae isolates found at our institution carried the KPC-3 allele, and among these, 67% were isolated from patients with prior exposures to health care settings outside of our institution. While all KPC-2 CPE isolates and all non-ST 258 CPE isolates had acquisition attributable to our hospital, only 33% of ST 258 K. pneumoniae isolates were considered to have acquisition attributable to our hospital. This suggests that the ST 258 lineage is not yet endemic at our institution but is circulating in the surrounding geographical region. 8
These results have important implications for understanding the regional dissemination of CPE isolates and highlight the likely role of the frequent transfer of patients among multiple health care facilities in propagating cross-transmission and the rapid regional spread of these bacteria. 8,11,20 K. pneumoniae ST 258 is prevalent throughout the United States and globally, 8 but its presence in Michigan has not been previously reported. Moreover, ST 258 isolates can carry either KPC-3 or KPC-2 genes, 8 so the observation that all carbapenemase-producing K. pneumoniae isolates at our institution carried KPC-3 genes suggests the emergence of a single clone in the region. More data from other regional health care facilities are needed to adequately address this hypothesis. We observed an overall mortality of 16%, which is lower than in some reports (32-44%). 11,21 This lower mortality may reflect the urinary source of most (58%) of our CPE isolates, as a previous study found that patients were at a decreased risk of in-hospital mortality when CPE were isolated from urine compared to other sources. 11 In addition, we did not attempt to differentiate between clinical infection and colonization with CPE, which may have influenced patient mortality outcomes. However, we observed lower mortality (20%) even among bacteremic patients. The main limitations of our study were the small sample size of the CPE patient cohort and the availability of even fewer isolates for molecular analysis. Since the study design was descriptive, we were unable to identify specific risk factors or attributable mortality associated with CPE in our patients. However, our patients also had high rates of prolonged exposure to health care environments, including ICUs, and to broad-spectrum antibiotics, consistent with prior studies. 20,22,23 Conclusions Our data suggest the transmission of the blaKPC gene in our geographic region by both clonal spread (ST 258) and potentially horizontal transfer. These complex modes of resistance transmission have significant public health and epidemiological implications. 17,24 Further studies, including molecular subtyping (e.g., multilocus variable-number tandem repeat analysis) and plasmid characterization, are needed to understand the emergence and transmission of these clones. Due to the limited therapeutic options and poor outcomes associated with these resistant organisms, heightened awareness and prompt detection of CPE emergence at the institutional and regional level are necessary to direct infection control and antimicrobial stewardship efforts to limit the spread of these pathogens.
Inverse wave scattering in the time domain: a factorization method approach Let $\Delta_{\Lambda}\le \lambda_{\Lambda}$ be a semi-bounded self-adjoint realization of the Laplace operator with boundary conditions assigned on the Lipschitz boundary of a bounded obstacle $\Omega$. Let $u^{\Lambda}_{f}$ and $u^{0}_{f}$ be the solutions of the wave equations corresponding to $\Delta_{\Lambda}$ and to the free Laplacian $\Delta$ respectively, with a source term $f$ concentrated at time $t=0$ (a pulse). We show that for any fixed $\lambda>\lambda_{\Lambda}\ge 0$ and $B\subset\subset{\mathbb R}^{n}\backslash\overline\Omega$, the obstacle $\Omega$ can be reconstructed by the data $$ F^{\Lambda}_{\lambda}f(x):=\int_{0}^{\infty}e^{-\sqrt\lambda\,t}\big(u^{\Lambda}_{f}(t,x)-u^{0}_{f}(t,x)\big)\,dt\,,\qquad x\in B\,,\ f\in L^{2}({\mathbb R}^{n})\,,\ \mbox{supp}(f)\subset B\,. $$ A similar result holds in the case of screens reconstruction, when the boundary conditions are assigned only on a part of the boundary. Our method exploits the factorized form of the resolvent difference $(-\Delta_{\Lambda}+\lambda)^{-1}-(-\Delta+\lambda)^{-1}$. Introduction. We consider the problem of obstacle reconstruction from measurements of time-dependent scattered waves. Different approaches have been developed; in [13] the time-Fourier transform is used to process data in the frequency domain via the point source method. In [6] the case of Dirichlet obstacles is considered in the time-dependent setting by using measurements of causal waves, that is, waves such that u(t, x) = 0 for t ≤ T (see also [9] for Neumann and Robin obstacles). In these works a linear sampling method is adapted to work on time-domain data without using the Fourier transform. As remarked in [6], this approach may offer a better quality of the reconstruction compared to frequency domain methods working with a single frequency; nevertheless, an approximation argument prevents a rigorous mathematical characterization of the obstacle. A different strategy, based on an adaptation of the factorization method to the time-domain, has been recently proposed in [5]. There, the authors introduce a far field operator for Dirichlet obstacle scattering in the time-dependent setting; the inverse data are given by measurements of scattered causal waves in the far field regime. Using Laplace transform analysis of retarded potentials, the analytical framework to study this factorization is provided. However, to obtain a symmetric factorization with coercive middle operator (needed to implement the factorization method), a perturbed far field operator, arising from artificially modified measurements, is introduced. In our recent paper [16], we used Kreȋn-type resolvent formulae, combined with the limiting absorption principle and the factorization method, to provide inverse scattering (reconstruction) results for Lipschitz obstacles and screens. In what follows analogous results are obtained for the time-dependent obstacle scattering problem; our approach exploits the Laplace transform analysis of the time-propagator, leading to a factorized representation of the data operator in the Laplace transform domain. Then, using sampling methods, obstacles and screens can be reconstructed by the knowledge, for some fixed $\lambda>0$ and $B\subset\subset{\mathbb R}^{n}\backslash\overline\Omega$, of the data $F^{\Lambda}_{\lambda}f(x):=\int_{0}^{\infty}e^{-\sqrt\lambda\,t}\big(u^{\Lambda}_{f}(t,x)-u^{0}_{f}(t,x)\big)\,dt$, where $u^{\Lambda}_{f}$ and $u^{0}_{f}$ solve the inhomogeneous wave equations for $\Delta_{\Lambda}$ and $\Delta$ respectively with a pulse $f$ concentrated at time $t=0$ (see Theorems 4.2 and 4.3 for the precise statements).
Here $\Delta$ is the self-adjoint Laplacian in the whole space, describing free wave propagation, while $\Delta_{\Lambda}$ is the self-adjoint realization of the Laplacian with boundary conditions on (part of) the boundary $\Gamma$ of the obstacle $\Omega$, which are univocally individuated by the choice of the operator $\Lambda$ acting on functions on $\Gamma$. The allowed $\Lambda$'s permit one to consider all the standard local boundary conditions, in particular Dirichlet, Neumann, Robin and semi-transparent ones, either assigned on the whole boundary $\Gamma$ (see Section 2.1) or on a relatively open piece $\Sigma$ (see Section 2.2). Our modeling is inspired by a standard experimental setup where the incident wave is generated by pulses space-localized in a fixed bounded region $B$ and the measurements are performed by detectors placed in the same domain $B$ (see for instance [3]). This setting differs from the one proposed in [6] and [5], where the incident fields are not all localized in a single domain, while (concerning [5]) the data operator output, i.e. the physical measure, depicts the behaviour of scattered fields at far distances and large times. A relevant feature of our approach rests upon the fact that it avoids unphysical modifications of the data and provides a rigorous reconstruction algorithm (implying uniqueness). As in the aforementioned works, we require global-in-time data; this is due to the use of the Laplace transform. The error introduced by using finite-time data is considered: we provide an estimate of the difference (in uniform operator norm) between the experimentally realistic operator $F^{\Lambda,t_{\bullet},\varepsilon}_{\lambda}$, defined in terms of pulses concentrated on small time intervals $[0,\varepsilon]$, $\varepsilon\ll 1$, and measurements lasting a finite time $t_{\bullet}\gg\varepsilon$, and the ideal operator $F^{\Lambda}_{\lambda}$ corresponding to the limit case $t_{\bullet}=+\infty$, $\varepsilon=0$. In Lemma 3.1 we show that it is of order $(e^{-\lambda t_{\bullet}}+\varepsilon)\lambda^{-1/2}$; thus it can be made arbitrarily small by taking $\lambda$ sufficiently large.

Acknowledgements. The authors are indebted to Guanghui Hu for the fruitful discussions which largely inspired this work.

2. The resolvent formula for Laplacians with boundary conditions.

Let $\Delta:H^{2}(\mathbb{R}^{n})\subset L^{2}(\mathbb{R}^{n})\to L^{2}(\mathbb{R}^{n})$ be the self-adjoint operator given by the free Laplacian on the whole space; here $H^{2}(\mathbb{R}^{n})$ denotes the usual Sobolev space of square-integrable functions with square-integrable distributional Laplacian. Another self-adjoint operator $\widetilde\Delta:\operatorname{dom}(\widetilde\Delta)\subseteq L^{2}(\mathbb{R}^{3})\to L^{2}(\mathbb{R}^{3})$ is said to be a singular perturbation of $\Delta$ if the set $$\{u\in\operatorname{dom}(\Delta)\cap\operatorname{dom}(\widetilde\Delta):\ \Delta u=\widetilde\Delta u\}\qquad(2.1)$$ is dense in $L^{2}(\mathbb{R}^{3})$. In concrete situations $\widetilde\Delta$ represents the Laplace operator with some kind of boundary conditions on a null subset. We notice that $\widetilde\Delta$ is a self-adjoint extension of the symmetric operator $\Delta_{\bullet}$ given by restricting the free Laplacian to the set defined in (2.1). Therefore the singular perturbations of the Laplacian can be realized as self-adjoint extensions of the symmetric operators given by the restrictions of the Laplacian $\Delta$ to subspaces which are dense in $L^{2}(\mathbb{R}^{n})$ and closed in $H^{2}(\mathbb{R}^{n})$. Without loss of generality we can suppose that such subspaces are the kernels of some bounded linear maps. This leads us to introduce the following framework. Given an auxiliary Hilbert space $\mathsf K$, we introduce a linear application $\tau:H^{2}(\mathbb{R}^{3})\to\mathsf K$ which plays the role of an abstract trace (evaluation) map. We assume that 1. $\tau$ is continuous; 2. $\tau$ is surjective (so that $\mathsf K$ plays the role of the trace space); 3. $\ker(\tau)$ is dense in $L^{2}(\mathbb{R}^{3})$. In the following we do not identify $\mathsf K$ with its dual $\mathsf K^{*}$; however we use $\mathsf K^{**}\equiv\mathsf K$.
We suppose that there exists a Hilbert space $\mathsf K_{0}$ and continuous embeddings with dense range $\mathsf K\hookrightarrow\mathsf K_{0}\hookrightarrow\mathsf K^{*}$; then the $\mathsf K$-$\mathsf K^{*}$ duality $\langle\cdot,\cdot\rangle_{\mathsf K^{*},\mathsf K}$ (conjugate-linear with respect to the first variable) is defined in terms of the scalar product of $\mathsf K_{0}$. For any $z$ in the resolvent set $\rho(A_{0})$, we define the bounded operators in (2.2). Then, given a couple of reflexive Banach spaces $\mathsf X,\mathsf Y$, with $\mathsf K\hookrightarrow\mathsf X$ (and hence $\mathsf X^{*}\hookrightarrow\mathsf K^{*}$), and given $P\in\mathsf B(\mathsf X,\mathsf Y)$, we consider a family of maps $M_{z}\in\mathsf B(\mathsf X^{*},\mathsf X)$, $z\in\mathbb{C}\backslash(-\infty,0]$, such that the conditions in (2.3) hold. Whenever the set $\mathsf Z_{M}$ is not empty, we can define the map $$\Lambda:\mathsf Z_{M}\to\mathsf B(\mathsf X,\mathsf X^{*})\,,\quad z\mapsto\Lambda_{z}:=P^{*}(PM_{z}P^{*})^{-1}P\,.$$ By (2.3) one gets (see [19, relations (2) and (4)]) that the resulting Kreĭn-type formula (2.6) is the resolvent of a singular perturbation $\Delta_{\Lambda}$ of $\Delta$ and $\mathsf Z_{M}=\rho(\Delta_{\Lambda})\cap\mathbb{C}\backslash(-\infty,0]$. In the next sections, given an open, bounded set $\Omega\equiv\Omega_{\rm in}\subset\mathbb{R}^{n}$ with a Lipschitz boundary $\Gamma$ and such that $\Omega_{\rm ex}:=\mathbb{R}^{n}\backslash\overline\Omega$ is connected, we consider models where the map $\tau:H^{2}(\mathbb{R}^{n})\to\mathsf K$ corresponds to one of the following three different cases: 1) $\tau=\gamma_{0}$; 2) $\tau=\gamma_{1}$; 3) $\tau=(\gamma_{0},\gamma_{1})$. Here $\gamma_{0}$ and $\gamma_{1}$ denote the Dirichlet and Neumann traces on the boundary $\Gamma$ and $H^{s}(\Gamma)$, $|s|\le 1$, denotes the Hilbert space of Sobolev functions of order $s$ on $\Gamma$ (see, e.g., [14, Chapter 3]); the Hilbert space $B^{3/2}_{2,2}(\Gamma)$ is a Besov-like space (see [10, Section 2, Chapter V] for the precise definitions) giving the correct trace space of $\gamma_{0}|H^{2}(\mathbb{R}^{n})$ in the case $\Gamma$ is a Lipschitz manifold (whenever $\Gamma$ is more regular it identifies with $H^{3/2}(\Gamma)$). We introduce the label $\sharp=D,N,DN$ according to the possible different choices 1)-3) above. The operators defined in (2.2) are then expressed in terms of one of the traces above. We also introduce the spaces $\mathsf K=\mathsf K_{\sharp}$, $\mathsf X=\mathsf X^{s}_{\sharp}$, where $SL_{z}$ and $DL_{z}$ are the single- and double-layer operators associated to $(-\Delta+z)$ (see, e.g., [14, Chapter 6]).

2.1. Laplace operators with boundary conditions on hypersurfaces. In this section we use Theorem 2.1 in the case $\mathsf Y=\mathsf X$ and $P=1_{\mathsf X}$.

2.1.1. The Dirichlet Laplacian. Let $\Delta^{D}_{\Omega_{\rm in/ex}}$ be the self-adjoint operators in $L^{2}(\Omega_{\rm in/ex})$ corresponding to the Laplace operator with Dirichlet boundary conditions. One has that $\Delta^{D}_{\Omega_{\rm in}}\oplus\Delta^{D}_{\Omega_{\rm ex}}$ arises as a singular perturbation of this kind (see [15, Section 5.2]).

2.1.2. The Neumann Laplacian. Let $\Delta^{N}_{\Omega_{\rm in/ex}}$ be the self-adjoint operators in $L^{2}(\Omega_{\rm in/ex})$ corresponding to the Laplace operator with Neumann boundary conditions. One has that $\Delta^{N}_{\Omega_{\rm in}}\oplus\Delta^{N}_{\Omega_{\rm ex}}$ arises in the same way (see [15, Section 5.3]).

2.1.3. The Laplacian with semi-transparent boundary conditions. Here $\alpha$ and $\theta$ are real-valued functions and we use the same symbols to denote the corresponding multiplication operators. With the corresponding choice of $\Lambda$, the self-adjoint operator $\Delta_{\Lambda_{\alpha}}$ represents a bounded from above Laplace operator with the semi-transparent boundary conditions (2.7); for $p>2$, the self-adjoint operator $\Delta_{\Lambda_{\theta}}$ represents a bounded from above Laplace operator with further semi-transparent boundary conditions (here $C^{\kappa}(\Gamma)$ denotes the space of Hölder-continuous functions of order $\kappa$ on $\Gamma$). Taking $\lambda_{b}\ge 0$ accordingly, the self-adjoint operator $\Delta_{\Lambda_{b}}$ represents a bounded from above Laplace operator with local boundary conditions (see [18, Section 5.3]). Notice that, since $\gamma^{\rm in/ex}_{1}$ are both defined in terms of the outward normal vector, the case describing the same Robin boundary conditions at both sides of $\Gamma$ corresponds to a suitable choice of the coefficients. In the case $\Gamma$ is of class $C^{1,1}$, the sign condition $b_{11}<0$ can be avoided (see [18, Section 5.3]).

2.2. Laplace operators with boundary conditions on parts of hypersurfaces. Whenever $\operatorname{sign}(\alpha)$ is constant, the self-adjoint operator $\Delta_{\Lambda_{\alpha,\Sigma}}$ represents a bounded from below Laplacian with boundary conditions (2.7) on $\Sigma$ (see [16]); likewise, the self-adjoint operator $\Delta_{\Lambda_{b,\Sigma}}$ represents a bounded from below Laplacian with boundary conditions (2.9) on $\Sigma$ (see [16, Section 5.1.3]).
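For the reader's convenience, the Kreĭn-type resolvent formula invoked as (2.6) above can be sketched in display form; the expression below follows the conventions of [19] (with $G_{z}$ the operator built from the trace map $\tau$ as in (2.2)), and is a schematic rendering under that assumption rather than a verbatim quotation of the paper's equation:

\[
R^{\Lambda}_{z}\;=\;R_{z}\;+\;G_{z}\,\Lambda_{z}\,G_{\bar z}^{\,*}\,,
\qquad
R_{z}:=(-\Delta+z)^{-1},\quad
\Lambda_{z}=P^{*}(PM_{z}P^{*})^{-1}P\,.
\]

The structural point used later is that the difference $R^{\Lambda}_{z}-R_{z}$ factorizes through the boundary space, which is what makes the factorization method applicable.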
3. Wave Scattering in the time domain.

Let $\Delta_{\Lambda}\le\lambda_{\Lambda}$, $\lambda_{\Lambda}\ge 0$, be a semi-bounded singular perturbation in $L^{2}(\mathbb{R}^{n})$ as defined in the previous section; we consider the Cauchy problem (3.1) for the wave equation. Here the index $\diamond$ has the two possible values $\diamond=0$ or $\diamond=\Lambda$ and $\Delta_{0}$ identifies with the free Laplacian $\Delta:H^{2}(\mathbb{R}^{n})\subset L^{2}(\mathbb{R}^{n})\to L^{2}(\mathbb{R}^{n})$; in the following we set $\lambda_{0}=0$. We say that $u\in C(\mathbb{R}_{+};L^{2}(\mathbb{R}^{n}))$ is a mild solution of (3.1) whenever the corresponding integral identities hold for any $t\ge 0$. By [2, Proposition 3.14, Corollary 3.14.8 and Example 3.14.16], the unique mild solution of (3.1) is given in terms of the $\mathsf B(L^{2}(\mathbb{R}^{n}))$-valued functions $t\mapsto\operatorname{Cos}^{\diamond}(t)$ and $t\mapsto\operatorname{Sin}^{\diamond}(t)$, which are univocally defined through the $\mathsf B(L^{2}(\mathbb{R}^{n}))$-valued (inverse) Laplace transform (see [8, (6.14), (6.15), Chap. II]). Whenever $\lambda_{\diamond}=0$, by functional calculus one gets $\operatorname{Cos}^{\diamond}(t)=\cos\big(t\sqrt{-\Delta_{\diamond}}\big)$ and $\operatorname{Sin}^{\diamond}(t)=\sin\big(t\sqrt{-\Delta_{\diamond}}\big)/\sqrt{-\Delta_{\diamond}}$. Given $\chi_{\varepsilon}$ a bounded, non-negative function such that $\chi_{\varepsilon}(s)=0$ whenever $s\ge\varepsilon>0$ and $\int_{0}^{\varepsilon}\chi_{\varepsilon}(s)\,ds=1$, and given $f\in L^{2}(\mathbb{R}^{n})$, let $u^{\diamond}_{f,\varepsilon}$ be the solution of the wave equation with the pulse $\chi_{\varepsilon}f$, i.e., $u^{\diamond}_{f,\varepsilon}$ solves the inhomogeneous Cauchy problem. In scattering experiments one measures the scattered wave $u^{\Lambda}_{f,\varepsilon}(t)-u^{0}_{f,\varepsilon}(t)$ produced by the short pulse $\chi_{\varepsilon}f$, $\varepsilon\ll 1$. Since the measurements last a finite time $t_{\bullet}\gg\varepsilon$ and detectors occupy a finite region, we introduce the continuous $\mathsf B(L^{2}(B))$-valued map (3.8), where $B\subset\subset\mathbb{R}^{n}\backslash\overline\Omega$ is open and bounded and $1_{X}$ denotes the indicator function of a set $X$. We introduce the operator family $F^{\Lambda,t_{\bullet},\varepsilon}_{\lambda}$ given by the Laplace transform of (3.8). In an ideal setup, corresponding to instantaneous pulses and measurements lasting an infinite time, (3.8) rephrases as $t\mapsto 1_{B}S^{\Lambda}_{t}1_{B}$, $S^{\Lambda}_{t}:=\operatorname{Sin}^{\Lambda}(t)-\operatorname{Sin}^{0}(t)$. By Laplace transform again, we define the ideal operator $F^{\Lambda}_{\lambda}$ (see (3.10)). The next lemma shows that for any given $\varepsilon>0$ and $t_{\bullet}>0$, one can choose a sufficiently large $\lambda$ such that the difference $F^{\Lambda}_{\lambda}-F^{\Lambda,t_{\bullet},\varepsilon}_{\lambda}$ is as small (in uniform operator norm) as one likes. Taking into account (3.5) and (3.6), here we set $x^{-1}\sinh xt=\min\{t,1\}$ whenever $x=0$.

4.1. Obstacles reconstruction. Before stating our results, let us introduce the following family of functions in $L^{2}(\mathbb{R}^{n})$: the functions $g^{x}_{\lambda}$, defined in terms of $K_{\nu}$, the modified Bessel function of the third kind of order $\nu$. Notice that $g^{x}_{\lambda}$ identifies with the fundamental solution of $(-\Delta+\lambda)$; in particular, whenever $n=3$, $g^{x}_{\lambda}(y)=e^{-\sqrt\lambda\,|x-y|}/(4\pi|x-y|)$. Let $\lambda>\lambda_{\Lambda}$ and $P=1_{\mathsf X^{s}_{\sharp}}$. Given $\lambda>\lambda_{\Lambda}$ and $B\subset\subset\mathbb{R}^{n}\backslash\overline\Omega$ open and bounded, let $F^{\Lambda}_{\lambda}$ be defined as in (3.10). If $M_{\lambda}$ is coercive, i.e., there exists $c_{\lambda}>0$ such that the corresponding lower bound holds, then membership $x\in\Omega$ is characterized by the inf-criterion (4.10), $x\in\Omega\iff\inf\{\dots\}>0$; if $M_{\lambda}$ is sign-definite, i.e., there exists $c_{\lambda}>0$ such that one of the two corresponding inequalities holds, the characterization is expressed through the spectral resolution of $F^{\Lambda}_{\lambda}$ given in Lemma 4.1.

Proof. By (3.4) and by the resolvent formula (2.6), one gets the factorized representation (in the case $\sharp=DN$, $R_{\Sigma}$ has to be replaced by $R_{\Sigma}\oplus R_{\Sigma}$). Since $R_{\Sigma}:\mathsf X^{s}_{\sharp}\to\mathsf X^{s}_{\sharp}$ is an orthogonal projector (see [16, Lemma 5.1]), the coercivity of $M_{\lambda}$ implies the coercivity of $R^{*}_{\Sigma}M_{\lambda}R_{\Sigma}$; likewise, if $M_{\lambda}$ is sign-definite then $R^{*}_{\Sigma}M_{\lambda}R_{\Sigma}$ is sign-definite as well. Then (4.10) is a consequence, by the inf-criterion in [12, Theorem 1.16], of Lemma 5.2 in the Appendix.
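As a complement to the proof above, here is a short sketch (ours, not quoted from the paper) of the elementary Laplace-transform computation that turns the time-domain data into the resolvent difference; it only uses the functional-calculus representation of $\operatorname{Sin}^{\diamond}(t)$ recalled in Section 3:

\[
\int_{0}^{\infty}e^{-\sqrt{\lambda}\,t}\,\frac{\sin(\omega t)}{\omega}\,dt
=\frac{1}{\lambda+\omega^{2}}\qquad(\lambda>0,\ \omega\ge 0)\,,
\]

so that, substituting $\omega^{2}=-\Delta_{\diamond}$ by functional calculus (for $\lambda>\lambda_{\Lambda}$ in the perturbed case),

\[
\int_{0}^{\infty}e^{-\sqrt{\lambda}\,t}\,\operatorname{Sin}^{\diamond}(t)\,dt
=(-\Delta_{\diamond}+\lambda)^{-1}\,,
\qquad
F^{\Lambda}_{\lambda}
=1_{B}\big((-\Delta_{\Lambda}+\lambda)^{-1}-(-\Delta+\lambda)^{-1}\big)1_{B}\,.
\]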
On classes of C3 and D3 modules The aim of this paper is to study the notions of $\mathcal{A}$-C3 and $\mathcal{A}$-D3 modules for some class $\mathcal{A}$ of right modules. Several characterizations of these modules are provided and used to describe some well-known classes of rings and modules. For example, a regular right $R$-module $F$ is a $V$-module if and only if every $F$-cyclic module $M$ is an $\mathcal{A}$-C3 module where $\mathcal{A}$ is the class of all simple submodules of $M$. Moreover, let $R$ be a right artinian ring and $\mathcal{A}$, a class of right $R$-modules with local endomorphisms, containing all simple right $R$-modules and closed under isomorphisms. If all right $R$-modules are $\mathcal{A}$-injective, then $R$ is a serial artinian ring with $J^{2}(R)=0$ if and only if every $\mathcal{A}$-C3 right $R$-module is quasi-injective, if and only if every $\mathcal{A}$-C3 right $R$-module is C3. Introduction and notation. The study of modules with the summand intersection property was motivated by the following result of Kaplansky: every free module over a commutative principal ideal ring has the summand intersection property (see [14, Exercise 51(b)]). A module $M$ is said to have the summand intersection property if the intersection of any two direct summands of $M$ is a direct summand of $M$. This definition was introduced by Wilson [18]. Dually, Garcia [10] considered the summand sum property. A module $M$ is said to have the summand sum property if the sum of any two direct summands is a direct summand of $M$. These properties have been studied by several authors (see [1,3,11,12,17], ...). Moreover, the classes of C3-modules and D3-modules have recently been studied by Yousif et al. in [4,20]. Characterizations of semisimple rings, regular rings and other classes of rings are obtained via C3-modules and D3-modules. On the other hand, several authors investigated properties of generalizations of C3-modules and D3-modules in [6,13], namely, simple-direct-injective modules and simple-direct-projective modules. A right $R$-module $M$ is called a C3-module if, whenever $A$ and $B$ are direct summands of $M$ with $A\cap B=0$, then $A\oplus B$ is also a direct summand of $M$; $M$ is called simple-direct-injective in [6] if the submodules $A$ and $B$ in the above definition are simple. Dually, $M$ is called a D3-module if, whenever $M_{1}$ and $M_{2}$ are direct summands of $M$ and $M=M_{1}+M_{2}$, then $M_{1}\cap M_{2}$ is a direct summand of $M$; $M$ is called simple-direct-projective in [13] if the submodules $M_{1}$ and $M_{2}$ in the above definition are maximal. In Section 2, we introduce the notions of $\mathcal A$-C3 modules and $\mathcal A$-D3 modules, where $\mathcal A$ is a class of right modules over the ring $R$ closed under isomorphisms. It is shown in Proposition 2.7 that if each factor module of $M$ is $\mathcal A$-injective, then $M$ is an $\mathcal A$-D3 module if and only if $M$ satisfies D2 for the class $\mathcal A$, if and only if $M$ has the summand intersection property for the class $\mathcal A$. On the other hand, Proposition 2.13 shows that if every submodule of $M$ is $\mathcal A$-projective, then $M$ is an $\mathcal A$-C3 module if and only if $M$ satisfies C2 for the class $\mathcal A$, if and only if $M$ has the summand sum property for the class $\mathcal A$. Some well-known properties of other modules are obtained from these results. In Section 3, we provide some characterizations of serial artinian rings and semisimple artinian rings.
Theorems 3.2 and 3.3 state the following. Let $R$ be a right artinian ring and $\mathcal A$ a class of right $R$-modules with local endomorphisms, containing all simple right $R$-modules and closed under isomorphisms: (1) if all right $R$-modules are $\mathcal A$-injective, then certain conditions are equivalent for the ring $R$; (2) if all right $R$-modules are $\mathcal A$-projective, then certain further conditions are equivalent for the ring $R$. Moreover, we give an equivalent condition for a regular $V$-module: Theorem 3.9 shows that a regular right $R$-module $F$ is a $V$-module if and only if every $F$-cyclic module is simple-direct-injective. This extends the corresponding result from rings to modules. Throughout this paper $R$ denotes an associative ring with identity, and modules will be unitary right $R$-modules. The Jacobson radical of $R$ is denoted by $J(R)$. The notations $N\le M$, $N\le_{e}M$, $N\trianglelefteq M$, and $N\subset^{d}M$ mean that $N$ is a submodule, an essential submodule, a fully invariant submodule, and a direct summand of $M$, respectively. Let $M$ and $N$ be right $R$-modules. $M$ is called $N$-injective if for any right $R$-module $K$ and any monomorphism $f:K\to N$, the induced homomorphism $\operatorname{Hom}(N,M)\to\operatorname{Hom}(K,M)$ by $f$ is an epimorphism. $M$ is called $N$-projective if for any right $R$-module $K$ and any epimorphism $f:N\to K$, the induced homomorphism $\operatorname{Hom}(M,N)\to\operatorname{Hom}(M,K)$ by $f$ is an epimorphism. Let $\mathcal A$ be a class of right modules over the ring $R$. $M$ is called $\mathcal A$-injective ($\mathcal A$-projective) if $M$ is $N$-injective (resp., $N$-projective) for all $N\in\mathcal A$. We refer to [5], [7], [16], and [19] for all the undefined notions in this paper.

On $\mathcal A$-C3 modules and $\mathcal A$-D3 modules

Let $\mathcal A$ be a class of right modules over a ring $R$ closed under isomorphisms. We say that a right $R$-module $M$ is an $\mathcal A$-C3 module if, whenever $A\in\mathcal A$ and $B\in\mathcal A$ are direct summands of $M$ with $A\cap B=0$, then $A\oplus B$ is a direct summand of $M$; $\mathcal A$-D3 modules are defined dually. When $\mathcal A$ is the class of simple right $R$-modules, these are precisely the simple-direct-injective (resp., simple-direct-projective) modules studied in [6,13]. (4) If $\mathcal A$ is a class of injective right $R$-modules, then $M$ is always an $\mathcal A$-C3 module. (5) If $\mathcal A$ is a class of projective right $R$-modules, then $M$ is always an $\mathcal A$-D3 module. Lemma 2.2. Let $\mathcal A$ be a class of right $R$-modules closed under isomorphisms. Then every summand of an $\mathcal A$-C3 module ($\mathcal A$-D3 module) is also an $\mathcal A$-C3 module (resp., $\mathcal A$-D3 module). Proof. The proof is straightforward. Proof. It is similar to the proof of Proposition 2.2 in [4]. Dually to Proposition 2.3, we have the following proposition. Let $f:A\to B$ be a homomorphism. We denote by $\langle f\rangle$ the graph submodule of $A\oplus B$, $\langle f\rangle:=\{(a,f(a)):a\in A\}$. The following result is proved in Lemma 2.6 of [15]. (1) ⇒ (5). We prove this by induction on $n$. When $n=2$, the assertion follows from (1). Suppose that the assertion is true for $n=k$. Let $X_{1},X_{2},\dots,X_{k+1}$ be summands of $M$. By the equivalence of (1) and (3): whenever $X_{1},X_{2},\dots,X_{n}$ are direct summands of $M$ and $M/X_{1},M/X_{2},\dots,M/X_{n}$ are semisimple modules, then $\bigcap_{i=1}^{n}X_{i}$ is a direct summand of $M$. Corollary 2.9. Let $P$ be a quasi-projective module. If $X_{1},\dots,X_{n}$ are summands of $P$ and $P/X_{1},\dots,P/X_{n}$ are semisimple modules, then $\bigcap_{i=1}^{n}X_{i}$ is a direct summand of $P$. On the other hand, we can check that, by (4), $A$ is a direct summand of $M$. Proof. Let $f:A_{1}\to A_{2}$ be an $R$-homomorphism with $\operatorname{Ker}(f)\in\mathcal A$. By the hypothesis, there exists a decomposition $A_{1}=\operatorname{Ker}(f)\oplus B$ for a submodule $B$ of $A_{1}$. Then $B\oplus A_{2}$ is a direct summand of $M$. Note that every direct summand of an $\mathcal A$-C3 module is also an $\mathcal A$-C3 module. Hence $B\oplus A_{2}$ is an $\mathcal A$-C3 module. Let $g=f|_{B}:B\to A_{2}$. Then $g$ is a monomorphism and $\operatorname{Im}(g)=\operatorname{Im}(f)$. It is easy to see that (2) $M$ is an $\mathcal A$-C3 module.
(3) For any decomposition $M=A_{1}\oplus A_{2}$ with $A_{1}\in\mathcal A$, every homomorphism $f:A_{1}\to A_{2}$ has image a direct summand of $A_{2}$. (2) ⇒ (3) Let $f:A_{1}\to A_{2}$ be an $R$-homomorphism with $A_{1}\in\mathcal A$. By the hypothesis, $\operatorname{Ker}(f)$ is a direct summand of $A_{1}$. The rest of the proof follows from Proposition 2.11. (3) ⇒ (1) Let $N$ and $K$ be direct summands of $M$ such that $N,K\in\mathcal A$. Write $M=N\oplus N'$ and $M=K\oplus K'$ for some submodules $N',K'$ of $M$. Consider the canonical projections $\pi_{K}:M\to K$ and $\pi_{N'}:M\to N'$. Let $A=\pi_{N'}(\pi_{K}(N))$. If $M_{1}\cap M_{2}$ is not a direct summand of $M$, then, by an argument similar to the one presented above, we can show that $M_{1}\cap M_{2}=N_{2}\oplus N'_{2}$, where $N_{2}\in\mathcal A$ is a non-zero direct summand of $M$ and $N'_{2}\in\mathcal A$ is a submodule of $M$ isomorphic to a direct summand of $M$. Since each module of the class $\mathcal A$ is artinian, continuing similar constructions for some $k$ we obtain a decomposition. (2) ⇒ (1). It is obvious. (1) ⇒ (3). We prove this by induction on $n$. When $n=2$, the assertion follows from Proposition 2.12. Suppose that the assertion is true for $n=k$. Let $X_{1},X_{2},\dots,X_{k+1}$ be summands of $M$ with $X_{1},X_{2},\dots,X_{k+1}\in\mathcal A$. Then there exists a submodule $N$ of $M$ such that, by the equivalence of (1) and (2), $\pi(X_{k+1})$ is a direct summand of $M$ and, therefore, so is the required intersection. Example 2.14. Let $F$ be any nonzero free module over $\mathbb Z$ and $\mathcal A$ the class of all free $\mathbb Z$-modules. It is well known that $F$ is a quasi-continuous module and $F$ is not a continuous module. Thus, $F$ is an $\mathcal A$-C3 module and satisfies the property: there exists a submodule $N\in\mathcal A$ of $F$ that is isomorphic to a direct summand of $F$ but is not a direct summand. By (3), the image of the homomorphism $\pi|_{(C\oplus T)\cap B}\circ\sigma|_{H}$ is a direct summand. By the modular law, the implication (1) ⇒ (5) is proved similarly to the proof of Proposition 2.13. (2) Suppose that $A$ is a submodule of $N$ such that $A\simeq S$ with $S$ a submodule of $N$ and $S\in\mathcal A$. As in (1), we see that $M_{1}=M$. Therefore, it follows that $S\simeq E_{2}$ is an injective module. Thus $A$ is a direct summand of $N$. (3) We show that $N$ has no nonzero direct summand $S$ with $S\in\mathcal A$. Assume on the contrary that there exists a non-zero summand $S\subset^{d}N$ with $S\in\mathcal A$. Arguing as in (1), we get $S=0$, a contradiction. If $K=K_{2}$, then $K_{1}=0$ and so $S\oplus M=M\oplus K$. Therefore, $K\cong S$ and hence $K\in\mathcal A$, a contradiction. Recall that a module is uniserial if the lattice of its submodules is totally ordered under inclusion. A ring $R$ is called right uniserial if $R_{R}$ is a uniserial module. A ring $R$ is called serial if both $R_{R}$ and $_{R}R$ are direct sums of uniserial modules. (1) $R$ is a serial artinian ring with $J^{2}(R)=0$. Proof. (1) ⇒ (2) Assume that $R$ is an artinian serial ring with $J^{2}(R)=0$. Then every right $R$-module is a direct sum of a semisimple module and an injective module. Furthermore, every injective module is a direct sum of cyclic uniserial modules. Let $M$ be an $\mathcal A$-C3 module. We can write $M=(\oplus_{I}S_{i})\oplus(\oplus_{J}E_{j})$, where each $S_{i}$ is simple for $i\in I$ and $\oplus_{J}E_{j}$ is injective with each $E_{j}$ cyclic uniserial non-simple for $j\in J$. Note (1) $R$ is a right V-ring. Proposition 3.6. Let $\mathcal A$ be a class of right $R$-modules closed under isomorphisms and summands. Then the following conditions are equivalent: (1) All modules $A\in\mathcal A$ are projective. Proof. Then $M/(M_{1}\cap M_{2})$ is a projective module. We deduce that $M_{1}\cap M_{2}$ is a direct summand of $M$. This shows that $M$ is an $\mathcal A$-D3 module. (2) ⇒ (1). Suppose that $A\in\mathcal A$. Call $\varphi:R^{(I)}\to A$ an epimorphism. Then $R^{(I)}\oplus A$ is an $\mathcal A$-D3 module.
By Proposition 2.6, $A$ is isomorphic to a direct summand of $R^{(I)}$. Thus $A$ is a projective module. (1) $R$ is a semisimple artinian ring. Let $M$ be a right $R$-module. $M$ is called regular if every cyclic submodule of $M$ is a direct summand. A right $R$-module is called $M$-cyclic if it is isomorphic to a factor module of $M$. Lemma 3.8. Let $F$ be a regular module. Assume that $A\neq 0$ is a small finitely generated submodule of the factor module $F/F_{0}$ for some submodule $F_{0}$ of $F$ and $\mathcal A$ is the class of all modules isomorphic to $A$. Then there exists an $F$-cyclic module $M$ satisfying the property: there is a submodule $N\in\mathcal A$ of $M$ that is isomorphic to a direct summand of $M$ and is not a direct summand. A module $M$ is called a V-module if every simple module in $\sigma[M]$ is $M$-injective (see [19]). $R$ is called a right V-ring if the right module $R_{R}$ is a V-module. Theorem 3.9. The following conditions are equivalent for a regular module $F$: (1) $F$ is a V-module. (2) Every $F$-cyclic module $M$ is an $\mathcal A$-C3 module, where $\mathcal A$ is the class of all simple submodules of $M$. (2) ⇒ (1). Let $S\in\sigma[F]$ be a simple module and $E_{F}(S)$ the injective hull of $S$ in the category $\sigma[F]$. Assume that $E_{F}(S)\neq S$. As $E_{F}(S)$ is generated by $F$, there exists a homomorphism $f:F\to E_{F}(S)$ such that $S\subsetneq f(F)$. Then $S$ is a small submodule of $f(F)\simeq F/\operatorname{Ker}(f)$. Call $\mathcal A$ the class of all modules isomorphic to $S$. By Lemma 3.8, there exists an $F$-cyclic module $M$ satisfying the property: there is a submodule $N\in\mathcal A$ of $M$ that is isomorphic to a direct summand of $M$ and is not a direct summand. We infer from Proposition 2.15 that $M$ is not an $\mathcal A$-C3 module. This contradicts condition (2).
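For quick reference, the two base conditions used throughout can be restated in display form (this merely repeats the definitions from the introduction, in the paper's notation where $N\subset^{d}M$ means that $N$ is a direct summand of $M$):

\[
\text{(C3)}\quad A,B\subset^{d}M,\ A\cap B=0\ \Longrightarrow\ A\oplus B\subset^{d}M\,;
\qquad
\text{(D3)}\quad M_{1},M_{2}\subset^{d}M,\ M=M_{1}+M_{2}\ \Longrightarrow\ M_{1}\cap M_{2}\subset^{d}M\,.
\]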